00:00:00.001 Started by upstream project "autotest-per-patch" build number 121034
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.056 The recommended git tool is: git
00:00:00.056 using credential 00000000-0000-0000-0000-000000000002
00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.088 Fetching changes from the remote Git repository
00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.128 Using shallow fetch with depth 1
00:00:00.128 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.128 > git --version # timeout=10
00:00:00.176 > git --version # 'git version 2.39.2'
00:00:00.176 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.723 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.734 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.745 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD)
00:00:03.745 > git config core.sparsecheckout # timeout=10
00:00:03.757 > git read-tree -mu HEAD # timeout=10
00:00:03.771 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5
00:00:03.788 Commit message: "pool: attach build logs for failed merge builds"
00:00:03.788 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10
00:00:03.874 [Pipeline] Start of Pipeline
00:00:03.886 [Pipeline] library
00:00:03.888 Loading library shm_lib@master
00:00:03.888 Library shm_lib@master is cached. Copying from home.
00:00:03.904 [Pipeline] node
00:00:03.911 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.913 [Pipeline] {
00:00:03.923 [Pipeline] catchError
00:00:03.925 [Pipeline] {
00:00:03.941 [Pipeline] wrap
00:00:03.951 [Pipeline] {
00:00:03.959 [Pipeline] stage
00:00:03.961 [Pipeline] { (Prologue)
00:00:04.138 [Pipeline] sh
00:00:04.419 + logger -p user.info -t JENKINS-CI
00:00:04.438 [Pipeline] echo
00:00:04.439 Node: GP8
00:00:04.448 [Pipeline] sh
00:00:04.744 [Pipeline] setCustomBuildProperty
00:00:04.757 [Pipeline] echo
00:00:04.759 Cleanup processes
00:00:04.765 [Pipeline] sh
00:00:05.046 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.304 3738949 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.313 [Pipeline] sh
00:00:05.588 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.589 ++ grep -v 'sudo pgrep'
00:00:05.589 ++ awk '{print $1}'
00:00:05.589 + sudo kill -9
00:00:05.589 + true
00:00:05.602 [Pipeline] cleanWs
00:00:05.611 [WS-CLEANUP] Deleting project workspace...
00:00:05.611 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.617 [WS-CLEANUP] done
00:00:05.622 [Pipeline] setCustomBuildProperty
00:00:05.639 [Pipeline] sh
00:00:05.912 + sudo git config --global --replace-all safe.directory '*'
00:00:05.982 [Pipeline] nodesByLabel
00:00:05.983 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.994 [Pipeline] httpRequest
00:00:05.999 HttpMethod: GET
00:00:06.000 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:06.005 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:06.011 Response Code: HTTP/1.1 200 OK
00:00:06.011 Success: Status code 200 is in the accepted range: 200,404
00:00:06.012 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:16.296 [Pipeline] sh
00:00:16.577 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:16.594 [Pipeline] httpRequest
00:00:16.598 HttpMethod: GET
00:00:16.599 URL: http://10.211.164.96/packages/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz
00:00:16.600 Sending request to url: http://10.211.164.96/packages/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz
00:00:16.609 Response Code: HTTP/1.1 200 OK
00:00:16.610 Success: Status code 200 is in the accepted range: 200,404
00:00:16.610 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz
00:01:33.775 [Pipeline] sh
00:01:34.057 + tar --no-same-owner -xf spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz
00:01:38.259 [Pipeline] sh
00:01:38.568 + git -C spdk log --oneline -n5
00:01:38.568 4907d1565 lib/nvmf: deprecate [listen_]address.transport
00:01:38.568 ea150257d nvmf/rpc: fix input validation for nvmf_subsystem_add_listener
00:01:38.568 dd57ed3e8 sma: add listener check on vfio device creation
00:01:38.568 d36d2b7e8 doc: mark adrfam as optional
00:01:38.568 129e6ba3b test/nvmf: add missing remove listener discovery
00:01:38.583 [Pipeline] }
00:01:38.602 [Pipeline] // stage
00:01:38.613 [Pipeline] stage
00:01:38.616 [Pipeline] { (Prepare)
00:01:38.638 [Pipeline] writeFile
00:01:38.655 [Pipeline] sh
00:01:38.935 + logger -p user.info -t JENKINS-CI
00:01:38.946 [Pipeline] sh
00:01:39.226 + logger -p user.info -t JENKINS-CI
00:01:39.237 [Pipeline] sh
00:01:39.516 + cat autorun-spdk.conf
00:01:39.517 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.517 SPDK_TEST_NVMF=1
00:01:39.517 SPDK_TEST_NVME_CLI=1
00:01:39.517 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:39.517 SPDK_TEST_NVMF_NICS=e810
00:01:39.517 SPDK_TEST_VFIOUSER=1
00:01:39.517 SPDK_RUN_UBSAN=1
00:01:39.517 NET_TYPE=phy
00:01:39.524 RUN_NIGHTLY=0
00:01:39.529 [Pipeline] readFile
00:01:39.552 [Pipeline] withEnv
00:01:39.554 [Pipeline] {
00:01:39.567 [Pipeline] sh
00:01:39.848 + set -ex
00:01:39.848 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:39.848 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:39.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:39.848 ++ SPDK_TEST_NVMF=1
00:01:39.848 ++ SPDK_TEST_NVME_CLI=1
00:01:39.848 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:39.848 ++ SPDK_TEST_NVMF_NICS=e810
00:01:39.849 ++ SPDK_TEST_VFIOUSER=1
00:01:39.849 ++ SPDK_RUN_UBSAN=1
00:01:39.849 ++ NET_TYPE=phy
00:01:39.849 ++ RUN_NIGHTLY=0
00:01:39.849 + case $SPDK_TEST_NVMF_NICS in
00:01:39.849 + DRIVERS=ice
00:01:39.849 + [[ tcp == \r\d\m\a ]]
00:01:39.849 + [[ -n ice ]]
00:01:39.849 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:40.107 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:40.107 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:40.107 rmmod: ERROR: Module irdma is not currently loaded
00:01:40.107 rmmod: ERROR: Module i40iw is not currently loaded
00:01:40.107 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:40.107 + true
00:01:40.107 + for D in $DRIVERS
00:01:40.107 + sudo modprobe ice
00:01:40.107 + exit 0
00:01:40.116 [Pipeline] }
00:01:40.136 [Pipeline] // withEnv
00:01:40.142 [Pipeline] }
00:01:40.161 [Pipeline] // stage
00:01:40.170 [Pipeline] catchError
00:01:40.172 [Pipeline] {
00:01:40.187 [Pipeline] timeout
00:01:40.188 Timeout set to expire in 40 min
00:01:40.190 [Pipeline] {
00:01:40.206 [Pipeline] stage
00:01:40.208 [Pipeline] { (Tests)
00:01:40.227 [Pipeline] sh
00:01:40.510 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.511 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.511 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.511 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:40.511 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.511 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.511 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:40.511 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.511 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.511 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.511 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.511 + source /etc/os-release
00:01:40.511 ++ NAME='Fedora Linux'
00:01:40.511 ++ VERSION='38 (Cloud Edition)'
00:01:40.511 ++ ID=fedora
00:01:40.511 ++ VERSION_ID=38
00:01:40.511 ++ VERSION_CODENAME=
00:01:40.511 ++ PLATFORM_ID=platform:f38
00:01:40.511 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:40.511 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:40.511 ++ LOGO=fedora-logo-icon
00:01:40.511 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:40.511 ++ HOME_URL=https://fedoraproject.org/
00:01:40.511 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:40.511 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:40.511 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:40.511 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:40.511 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:40.511 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:40.511 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:40.511 ++ SUPPORT_END=2024-05-14
00:01:40.511 ++ VARIANT='Cloud Edition'
00:01:40.511 ++ VARIANT_ID=cloud
00:01:40.511 + uname -a
00:01:40.511 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:40.511 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:41.887 Hugepages
00:01:41.887 node hugesize free / total
00:01:41.887 node0 1048576kB 0 / 0
00:01:41.887 node0 2048kB 0 / 0
00:01:41.887 node1 1048576kB 0 / 0
00:01:41.887 node1 2048kB 0 / 0
00:01:41.887
00:01:41.887 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:41.887 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:41.887 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:41.887 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:41.887 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:41.887 + rm -f /tmp/spdk-ld-path
00:01:41.887 + source autorun-spdk.conf
00:01:41.887 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.887 ++ SPDK_TEST_NVMF=1
00:01:41.887 ++ SPDK_TEST_NVME_CLI=1
00:01:41.887 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:41.887 ++ SPDK_TEST_NVMF_NICS=e810
00:01:41.887 ++ SPDK_TEST_VFIOUSER=1
00:01:41.887 ++ SPDK_RUN_UBSAN=1
00:01:41.887 ++ NET_TYPE=phy
00:01:41.887 ++ RUN_NIGHTLY=0
00:01:41.887 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:41.887 + [[ -n '' ]]
00:01:41.887 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:41.887 + for M in /var/spdk/build-*-manifest.txt
00:01:41.887 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:41.887 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:41.887 + for M in /var/spdk/build-*-manifest.txt
00:01:41.887 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:41.887 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:41.887 ++ uname
00:01:41.887 + [[ Linux == \L\i\n\u\x ]]
00:01:41.887 + sudo dmesg -T
00:01:41.887 + sudo dmesg --clear
00:01:41.887 + dmesg_pid=3739668
00:01:41.887 + [[ Fedora Linux == FreeBSD ]]
00:01:41.887 + sudo dmesg -Tw
00:01:41.887 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.887 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.887 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:41.887 + [[ -x /usr/src/fio-static/fio ]]
00:01:41.887 + export FIO_BIN=/usr/src/fio-static/fio
00:01:41.887 + FIO_BIN=/usr/src/fio-static/fio
00:01:41.887 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:41.887 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:41.887 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:41.887 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.887 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.887 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:41.887 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.887 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.887 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:41.887 Test configuration:
00:01:41.887 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.887 SPDK_TEST_NVMF=1
00:01:41.887 SPDK_TEST_NVME_CLI=1
00:01:41.887 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:41.887 SPDK_TEST_NVMF_NICS=e810
00:01:41.887 SPDK_TEST_VFIOUSER=1
00:01:41.887 SPDK_RUN_UBSAN=1
00:01:41.887 NET_TYPE=phy
00:01:41.887 RUN_NIGHTLY=0
00:01:41.887 21:53:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:41.887 21:53:24 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:41.887 21:53:24 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:41.887 21:53:24 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:41.887 21:53:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.887 21:53:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.887 21:53:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.887 21:53:24 -- paths/export.sh@5 -- $ export PATH
00:01:41.887 21:53:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.887 21:53:24 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:41.887 21:53:24 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:41.887 21:53:24 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713988404.XXXXXX
00:01:41.887 21:53:24 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713988404.6K18Hh
00:01:41.887 21:53:24 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:41.887 21:53:24 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:41.887 21:53:24 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:41.887 21:53:24 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:41.887 21:53:24 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:41.887 21:53:24 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:41.887 21:53:24 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:01:41.887 21:53:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.887 21:53:24 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:41.887 21:53:24 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:01:41.887 21:53:24 -- pm/common@17 -- $ local monitor
00:01:41.887 21:53:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.887 21:53:24 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3739702
00:01:41.887 21:53:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.887 21:53:24 -- pm/common@21 -- $ date +%s
00:01:41.887 21:53:24 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3739704
00:01:41.887 21:53:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.887 21:53:24 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3739707
00:01:41.887 21:53:24 -- pm/common@21 -- $ date +%s
00:01:41.887 21:53:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.887 21:53:24 -- pm/common@21 -- $ date +%s
00:01:41.887 21:53:24 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3739709
00:01:41.887 21:53:24 -- pm/common@26 -- $ sleep 1
00:01:41.887 21:53:24 -- pm/common@21 -- $ date +%s
00:01:41.887 21:53:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713988404
00:01:41.887 21:53:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713988404
00:01:41.887 21:53:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713988404
00:01:41.887 21:53:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713988404
00:01:42.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713988404_collect-bmc-pm.bmc.pm.log
00:01:42.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713988404_collect-cpu-load.pm.log
00:01:42.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713988404_collect-vmstat.pm.log
00:01:42.147 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713988404_collect-cpu-temp.pm.log
00:01:43.084 21:53:25 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:01:43.084 21:53:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:43.084 21:53:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:43.084 21:53:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:43.084 21:53:25 -- spdk/autobuild.sh@16 -- $ date -u
00:01:43.084 Wed Apr 24 07:53:25 PM UTC 2024
00:01:43.084 21:53:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:43.084 v24.05-pre-415-g4907d1565
00:01:43.084 21:53:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:43.084 21:53:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:43.084 21:53:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:43.084 21:53:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:43.084 21:53:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:43.084 21:53:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.343 ************************************
00:01:43.343 START TEST ubsan
00:01:43.343 ************************************
00:01:43.343 21:53:25 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:01:43.343 using ubsan
00:01:43.343
00:01:43.343 real 0m0.000s
00:01:43.343 user 0m0.000s
00:01:43.343 sys 0m0.000s
00:01:43.343 21:53:25 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:43.343 21:53:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.343 ************************************
00:01:43.343 END TEST ubsan
00:01:43.343 ************************************
00:01:43.343 21:53:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:43.343 21:53:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:43.343 21:53:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:43.343 21:53:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:43.343 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:43.343 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:43.602 Using 'verbs' RDMA provider
00:01:56.380 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:08.643 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:08.643 Creating mk/config.mk...done.
00:02:08.643 Creating mk/cc.flags.mk...done.
00:02:08.643 Type 'make' to build.
00:02:08.643 21:53:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:02:08.643 21:53:49 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:08.643 21:53:49 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:08.643 21:53:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:08.643 ************************************
00:02:08.643 START TEST make
00:02:08.643 ************************************
00:02:08.643 21:53:49 -- common/autotest_common.sh@1111 -- $ make -j48
00:02:08.643 make[1]: Nothing to be done for 'all'.
00:02:09.214 The Meson build system
00:02:09.214 Version: 1.3.1
00:02:09.214 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:09.214 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.214 Build type: native build
00:02:09.214 Project name: libvfio-user
00:02:09.214 Project version: 0.0.1
00:02:09.214 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:09.214 C linker for the host machine: cc ld.bfd 2.39-16
00:02:09.214 Host machine cpu family: x86_64
00:02:09.214 Host machine cpu: x86_64
00:02:09.214 Run-time dependency threads found: YES
00:02:09.214 Library dl found: YES
00:02:09.214 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:09.214 Run-time dependency json-c found: YES 0.17
00:02:09.214 Run-time dependency cmocka found: YES 1.1.7
00:02:09.214 Program pytest-3 found: NO
00:02:09.214 Program flake8 found: NO
00:02:09.214 Program misspell-fixer found: NO
00:02:09.214 Program restructuredtext-lint found: NO
00:02:09.214 Program valgrind found: YES (/usr/bin/valgrind)
00:02:09.214 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:09.214 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:09.214 Compiler for C supports arguments -Wwrite-strings: YES
00:02:09.214 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:09.214 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:09.214 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:09.214 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:09.214 Build targets in project: 8
00:02:09.214 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:09.214 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:09.214
00:02:09.214 libvfio-user 0.0.1
00:02:09.214
00:02:09.214 User defined options
00:02:09.214 buildtype : debug
00:02:09.214 default_library: shared
00:02:09.214 libdir : /usr/local/lib
00:02:09.214
00:02:09.214 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:10.158 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:10.158 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:10.158 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:10.158 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:10.158 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:10.158 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:10.158 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:10.158 [7/37] Compiling C object samples/null.p/null.c.o
00:02:10.158 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:10.158 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:10.158 [10/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:10.158 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:10.158 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:10.420 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:10.420 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:10.420 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:10.420 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:10.420 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:10.421 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:10.421 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:10.421 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:10.421 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:10.421 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:10.421 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:10.421 [24/37] Compiling C object samples/server.p/server.c.o
00:02:10.421 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:10.421 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:10.421 [27/37] Compiling C object samples/client.p/client.c.o
00:02:10.421 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:02:10.421 [29/37] Linking target samples/client
00:02:10.683 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:10.683 [31/37] Linking target test/unit_tests
00:02:10.683 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:10.942 [33/37] Linking target samples/server
00:02:10.942 [34/37] Linking target samples/shadow_ioeventfd_server
00:02:10.942 [35/37] Linking target samples/null
00:02:10.942 [36/37] Linking target samples/gpio-pci-idio-16
00:02:10.942 [37/37] Linking target samples/lspci
00:02:10.942 INFO: autodetecting backend as ninja
00:02:10.942 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:11.665 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:11.665 ninja: no work to do.
00:02:16.932 The Meson build system
00:02:16.932 Version: 1.3.1
00:02:16.932 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:16.932 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:16.932 Build type: native build
00:02:16.932 Program cat found: YES (/usr/bin/cat)
00:02:16.932 Project name: DPDK
00:02:16.932 Project version: 23.11.0
00:02:16.932 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:16.932 C linker for the host machine: cc ld.bfd 2.39-16
00:02:16.932 Host machine cpu family: x86_64
00:02:16.932 Host machine cpu: x86_64
00:02:16.932 Message: ## Building in Developer Mode ##
00:02:16.932 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:16.932 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:16.932 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:16.932 Program python3 found: YES (/usr/bin/python3)
00:02:16.932 Program cat found: YES (/usr/bin/cat)
00:02:16.932 Compiler for C supports arguments -march=native: YES
00:02:16.932 Checking for size of "void *" : 8
00:02:16.932 Checking for size of "void *" : 8 (cached)
00:02:16.932 Library m found: YES
00:02:16.932 Library numa found: YES
00:02:16.932 Has header "numaif.h" : YES
00:02:16.932 Library fdt found: NO
00:02:16.932 Library execinfo found: NO
00:02:16.932 Has header "execinfo.h" : YES
00:02:16.932 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:16.932 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:16.932 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:16.932 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:16.932 Run-time dependency openssl found: YES 3.0.9
00:02:16.932 Run-time dependency libpcap found: YES 1.10.4
00:02:16.932 Has header "pcap.h" with dependency libpcap: YES
00:02:16.932 Compiler for C supports arguments -Wcast-qual: YES
00:02:16.932 Compiler for C supports arguments -Wdeprecated: YES
00:02:16.932 Compiler for C supports arguments -Wformat: YES
00:02:16.932 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:16.932 Compiler for C supports arguments -Wformat-security: NO
00:02:16.932 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:16.932 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:16.932 Compiler for C supports arguments -Wnested-externs: YES
00:02:16.932 Compiler for C supports arguments -Wold-style-definition: YES
00:02:16.932 Compiler for C supports arguments -Wpointer-arith: YES
00:02:16.932 Compiler for C supports arguments -Wsign-compare: YES
00:02:16.932 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:16.932 Compiler for C supports arguments -Wundef: YES
00:02:16.932 Compiler for C supports arguments -Wwrite-strings: YES
00:02:16.932 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:16.932 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:16.932 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:16.932 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:16.932 Program objdump found: YES (/usr/bin/objdump)
00:02:16.932 Compiler for C supports arguments -mavx512f: YES
00:02:16.932 Checking if "AVX512 checking" compiles: YES
00:02:16.932 Fetching value of define "__SSE4_2__" : 1
00:02:16.932 Fetching value of define "__AES__" : 1
00:02:16.932 Fetching value of define "__AVX__" : 1
00:02:16.932 Fetching value of define "__AVX2__" : (undefined)
00:02:16.932 Fetching value of define "__AVX512BW__" : (undefined)
00:02:16.932 Fetching value of define "__AVX512CD__" : (undefined)
00:02:16.932 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:16.932 Fetching value of define "__AVX512F__" : (undefined)
00:02:16.932 Fetching value of define "__AVX512VL__" : (undefined)
00:02:16.932 Fetching value of define "__PCLMUL__" : 1
00:02:16.932 Fetching value of define "__RDRND__" : 1
00:02:16.932 Fetching value of define "__RDSEED__" : (undefined)
00:02:16.932 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:16.932 Fetching value of define "__znver1__" : (undefined)
00:02:16.932 Fetching value of define "__znver2__" : (undefined)
00:02:16.932 Fetching value of define "__znver3__" : (undefined)
00:02:16.932 Fetching value of define "__znver4__" : (undefined)
00:02:16.932 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:16.932 Message: lib/log: Defining dependency "log"
00:02:16.932 Message: lib/kvargs: Defining dependency "kvargs"
00:02:16.932 Message: lib/telemetry: Defining dependency "telemetry"
00:02:16.932 Checking for function "getentropy" : NO
00:02:16.932 Message: lib/eal: Defining dependency "eal"
00:02:16.932 Message: lib/ring: Defining dependency "ring"
00:02:16.932 Message: lib/rcu: Defining dependency "rcu"
00:02:16.932 Message: lib/mempool: Defining dependency "mempool"
00:02:16.932 Message: lib/mbuf: Defining dependency "mbuf"
00:02:16.932 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:16.932 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:16.932 Compiler for C supports arguments -mpclmul: YES
00:02:16.932 Compiler for C supports arguments -maes: YES
00:02:16.932 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:16.932 Compiler for C supports arguments -mavx512bw: YES
00:02:16.932 Compiler for C supports arguments -mavx512dq: YES
00:02:16.932 Compiler for C supports arguments -mavx512vl: YES
00:02:16.932 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:16.932 Compiler for C supports arguments -mavx2: YES
00:02:16.932 Compiler for C supports arguments -mavx: YES
00:02:16.932 Message: lib/net: Defining dependency "net"
00:02:16.932 Message: lib/meter: Defining dependency "meter"
00:02:16.932 Message: lib/ethdev: Defining dependency "ethdev"
00:02:16.932 Message: lib/pci: Defining dependency "pci"
00:02:16.932 Message: lib/cmdline: Defining dependency "cmdline"
00:02:16.933 Message: lib/hash: Defining dependency "hash"
00:02:16.933 Message: lib/timer: Defining dependency "timer"
00:02:16.933 Message: lib/compressdev: Defining dependency "compressdev"
00:02:16.933 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:16.933 Message: lib/dmadev: Defining dependency "dmadev"
00:02:16.933 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:16.933 Message: lib/power: Defining dependency "power"
00:02:16.933 Message: lib/reorder: Defining dependency "reorder"
00:02:16.933 Message: lib/security: Defining dependency "security"
00:02:16.933 Has header "linux/userfaultfd.h" : YES
00:02:16.933 Has header "linux/vduse.h" : YES
00:02:16.933 Message: lib/vhost: Defining dependency "vhost"
00:02:16.933 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:16.933 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:16.933 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:16.933 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:16.933 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:16.933 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:16.933 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:16.933 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:16.933 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:16.933 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:16.933 Program doxygen found: YES (/usr/bin/doxygen)
00:02:16.933 Configuring doxy-api-html.conf using configuration
00:02:16.933 Configuring doxy-api-man.conf using configuration
00:02:16.933 Program mandb found: YES (/usr/bin/mandb)
00:02:16.933 Program sphinx-build found: NO
00:02:16.933 Configuring rte_build_config.h using configuration 00:02:16.933 Message: 00:02:16.933 ================= 00:02:16.933 Applications Enabled 00:02:16.933 ================= 00:02:16.933 00:02:16.933 apps: 00:02:16.933 00:02:16.933 00:02:16.933 Message: 00:02:16.933 ================= 00:02:16.933 Libraries Enabled 00:02:16.933 ================= 00:02:16.933 00:02:16.933 libs: 00:02:16.933 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:16.933 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:16.933 cryptodev, dmadev, power, reorder, security, vhost, 00:02:16.933 00:02:16.933 Message: 00:02:16.933 =============== 00:02:16.933 Drivers Enabled 00:02:16.933 =============== 00:02:16.933 00:02:16.933 common: 00:02:16.933 00:02:16.933 bus: 00:02:16.933 pci, vdev, 00:02:16.933 mempool: 00:02:16.933 ring, 00:02:16.933 dma: 00:02:16.933 00:02:16.933 net: 00:02:16.933 00:02:16.933 crypto: 00:02:16.933 00:02:16.933 compress: 00:02:16.933 00:02:16.933 vdpa: 00:02:16.933 00:02:16.933 00:02:16.933 Message: 00:02:16.933 ================= 00:02:16.933 Content Skipped 00:02:16.933 ================= 00:02:16.933 00:02:16.933 apps: 00:02:16.933 dumpcap: explicitly disabled via build config 00:02:16.933 graph: explicitly disabled via build config 00:02:16.933 pdump: explicitly disabled via build config 00:02:16.933 proc-info: explicitly disabled via build config 00:02:16.933 test-acl: explicitly disabled via build config 00:02:16.933 test-bbdev: explicitly disabled via build config 00:02:16.933 test-cmdline: explicitly disabled via build config 00:02:16.933 test-compress-perf: explicitly disabled via build config 00:02:16.933 test-crypto-perf: explicitly disabled via build config 00:02:16.933 test-dma-perf: explicitly disabled via build config 00:02:16.933 test-eventdev: explicitly disabled via build config 00:02:16.933 test-fib: explicitly disabled via build config 00:02:16.933 test-flow-perf: explicitly disabled via build config 00:02:16.933 
test-gpudev: explicitly disabled via build config 00:02:16.933 test-mldev: explicitly disabled via build config 00:02:16.933 test-pipeline: explicitly disabled via build config 00:02:16.933 test-pmd: explicitly disabled via build config 00:02:16.933 test-regex: explicitly disabled via build config 00:02:16.933 test-sad: explicitly disabled via build config 00:02:16.933 test-security-perf: explicitly disabled via build config 00:02:16.933 00:02:16.933 libs: 00:02:16.933 metrics: explicitly disabled via build config 00:02:16.933 acl: explicitly disabled via build config 00:02:16.933 bbdev: explicitly disabled via build config 00:02:16.933 bitratestats: explicitly disabled via build config 00:02:16.933 bpf: explicitly disabled via build config 00:02:16.933 cfgfile: explicitly disabled via build config 00:02:16.933 distributor: explicitly disabled via build config 00:02:16.933 efd: explicitly disabled via build config 00:02:16.933 eventdev: explicitly disabled via build config 00:02:16.933 dispatcher: explicitly disabled via build config 00:02:16.933 gpudev: explicitly disabled via build config 00:02:16.933 gro: explicitly disabled via build config 00:02:16.933 gso: explicitly disabled via build config 00:02:16.933 ip_frag: explicitly disabled via build config 00:02:16.933 jobstats: explicitly disabled via build config 00:02:16.933 latencystats: explicitly disabled via build config 00:02:16.933 lpm: explicitly disabled via build config 00:02:16.933 member: explicitly disabled via build config 00:02:16.933 pcapng: explicitly disabled via build config 00:02:16.933 rawdev: explicitly disabled via build config 00:02:16.933 regexdev: explicitly disabled via build config 00:02:16.933 mldev: explicitly disabled via build config 00:02:16.933 rib: explicitly disabled via build config 00:02:16.933 sched: explicitly disabled via build config 00:02:16.933 stack: explicitly disabled via build config 00:02:16.933 ipsec: explicitly disabled via build config 00:02:16.933 pdcp: 
explicitly disabled via build config 00:02:16.933 fib: explicitly disabled via build config 00:02:16.933 port: explicitly disabled via build config 00:02:16.933 pdump: explicitly disabled via build config 00:02:16.933 table: explicitly disabled via build config 00:02:16.933 pipeline: explicitly disabled via build config 00:02:16.933 graph: explicitly disabled via build config 00:02:16.933 node: explicitly disabled via build config 00:02:16.933 00:02:16.933 drivers: 00:02:16.933 common/cpt: not in enabled drivers build config 00:02:16.933 common/dpaax: not in enabled drivers build config 00:02:16.933 common/iavf: not in enabled drivers build config 00:02:16.933 common/idpf: not in enabled drivers build config 00:02:16.933 common/mvep: not in enabled drivers build config 00:02:16.933 common/octeontx: not in enabled drivers build config 00:02:16.933 bus/auxiliary: not in enabled drivers build config 00:02:16.933 bus/cdx: not in enabled drivers build config 00:02:16.933 bus/dpaa: not in enabled drivers build config 00:02:16.933 bus/fslmc: not in enabled drivers build config 00:02:16.933 bus/ifpga: not in enabled drivers build config 00:02:16.933 bus/platform: not in enabled drivers build config 00:02:16.933 bus/vmbus: not in enabled drivers build config 00:02:16.933 common/cnxk: not in enabled drivers build config 00:02:16.933 common/mlx5: not in enabled drivers build config 00:02:16.933 common/nfp: not in enabled drivers build config 00:02:16.933 common/qat: not in enabled drivers build config 00:02:16.933 common/sfc_efx: not in enabled drivers build config 00:02:16.933 mempool/bucket: not in enabled drivers build config 00:02:16.933 mempool/cnxk: not in enabled drivers build config 00:02:16.933 mempool/dpaa: not in enabled drivers build config 00:02:16.933 mempool/dpaa2: not in enabled drivers build config 00:02:16.933 mempool/octeontx: not in enabled drivers build config 00:02:16.933 mempool/stack: not in enabled drivers build config 00:02:16.933 dma/cnxk: not in 
enabled drivers build config 00:02:16.933 dma/dpaa: not in enabled drivers build config 00:02:16.933 dma/dpaa2: not in enabled drivers build config 00:02:16.933 dma/hisilicon: not in enabled drivers build config 00:02:16.933 dma/idxd: not in enabled drivers build config 00:02:16.933 dma/ioat: not in enabled drivers build config 00:02:16.933 dma/skeleton: not in enabled drivers build config 00:02:16.933 net/af_packet: not in enabled drivers build config 00:02:16.933 net/af_xdp: not in enabled drivers build config 00:02:16.933 net/ark: not in enabled drivers build config 00:02:16.933 net/atlantic: not in enabled drivers build config 00:02:16.933 net/avp: not in enabled drivers build config 00:02:16.933 net/axgbe: not in enabled drivers build config 00:02:16.933 net/bnx2x: not in enabled drivers build config 00:02:16.933 net/bnxt: not in enabled drivers build config 00:02:16.933 net/bonding: not in enabled drivers build config 00:02:16.933 net/cnxk: not in enabled drivers build config 00:02:16.933 net/cpfl: not in enabled drivers build config 00:02:16.933 net/cxgbe: not in enabled drivers build config 00:02:16.933 net/dpaa: not in enabled drivers build config 00:02:16.933 net/dpaa2: not in enabled drivers build config 00:02:16.933 net/e1000: not in enabled drivers build config 00:02:16.933 net/ena: not in enabled drivers build config 00:02:16.933 net/enetc: not in enabled drivers build config 00:02:16.933 net/enetfec: not in enabled drivers build config 00:02:16.933 net/enic: not in enabled drivers build config 00:02:16.933 net/failsafe: not in enabled drivers build config 00:02:16.933 net/fm10k: not in enabled drivers build config 00:02:16.933 net/gve: not in enabled drivers build config 00:02:16.933 net/hinic: not in enabled drivers build config 00:02:16.933 net/hns3: not in enabled drivers build config 00:02:16.933 net/i40e: not in enabled drivers build config 00:02:16.933 net/iavf: not in enabled drivers build config 00:02:16.933 net/ice: not in enabled drivers 
build config 00:02:16.933 net/idpf: not in enabled drivers build config 00:02:16.933 net/igc: not in enabled drivers build config 00:02:16.933 net/ionic: not in enabled drivers build config 00:02:16.933 net/ipn3ke: not in enabled drivers build config 00:02:16.933 net/ixgbe: not in enabled drivers build config 00:02:16.933 net/mana: not in enabled drivers build config 00:02:16.933 net/memif: not in enabled drivers build config 00:02:16.933 net/mlx4: not in enabled drivers build config 00:02:16.933 net/mlx5: not in enabled drivers build config 00:02:16.934 net/mvneta: not in enabled drivers build config 00:02:16.934 net/mvpp2: not in enabled drivers build config 00:02:16.934 net/netvsc: not in enabled drivers build config 00:02:16.934 net/nfb: not in enabled drivers build config 00:02:16.934 net/nfp: not in enabled drivers build config 00:02:16.934 net/ngbe: not in enabled drivers build config 00:02:16.934 net/null: not in enabled drivers build config 00:02:16.934 net/octeontx: not in enabled drivers build config 00:02:16.934 net/octeon_ep: not in enabled drivers build config 00:02:16.934 net/pcap: not in enabled drivers build config 00:02:16.934 net/pfe: not in enabled drivers build config 00:02:16.934 net/qede: not in enabled drivers build config 00:02:16.934 net/ring: not in enabled drivers build config 00:02:16.934 net/sfc: not in enabled drivers build config 00:02:16.934 net/softnic: not in enabled drivers build config 00:02:16.934 net/tap: not in enabled drivers build config 00:02:16.934 net/thunderx: not in enabled drivers build config 00:02:16.934 net/txgbe: not in enabled drivers build config 00:02:16.934 net/vdev_netvsc: not in enabled drivers build config 00:02:16.934 net/vhost: not in enabled drivers build config 00:02:16.934 net/virtio: not in enabled drivers build config 00:02:16.934 net/vmxnet3: not in enabled drivers build config 00:02:16.934 raw/*: missing internal dependency, "rawdev" 00:02:16.934 crypto/armv8: not in enabled drivers build config 
00:02:16.934 crypto/bcmfs: not in enabled drivers build config 00:02:16.934 crypto/caam_jr: not in enabled drivers build config 00:02:16.934 crypto/ccp: not in enabled drivers build config 00:02:16.934 crypto/cnxk: not in enabled drivers build config 00:02:16.934 crypto/dpaa_sec: not in enabled drivers build config 00:02:16.934 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.934 crypto/ipsec_mb: not in enabled drivers build config 00:02:16.934 crypto/mlx5: not in enabled drivers build config 00:02:16.934 crypto/mvsam: not in enabled drivers build config 00:02:16.934 crypto/nitrox: not in enabled drivers build config 00:02:16.934 crypto/null: not in enabled drivers build config 00:02:16.934 crypto/octeontx: not in enabled drivers build config 00:02:16.934 crypto/openssl: not in enabled drivers build config 00:02:16.934 crypto/scheduler: not in enabled drivers build config 00:02:16.934 crypto/uadk: not in enabled drivers build config 00:02:16.934 crypto/virtio: not in enabled drivers build config 00:02:16.934 compress/isal: not in enabled drivers build config 00:02:16.934 compress/mlx5: not in enabled drivers build config 00:02:16.934 compress/octeontx: not in enabled drivers build config 00:02:16.934 compress/zlib: not in enabled drivers build config 00:02:16.934 regex/*: missing internal dependency, "regexdev" 00:02:16.934 ml/*: missing internal dependency, "mldev" 00:02:16.934 vdpa/ifc: not in enabled drivers build config 00:02:16.934 vdpa/mlx5: not in enabled drivers build config 00:02:16.934 vdpa/nfp: not in enabled drivers build config 00:02:16.934 vdpa/sfc: not in enabled drivers build config 00:02:16.934 event/*: missing internal dependency, "eventdev" 00:02:16.934 baseband/*: missing internal dependency, "bbdev" 00:02:16.934 gpu/*: missing internal dependency, "gpudev" 00:02:16.934 00:02:16.934 00:02:16.934 Build targets in project: 85 00:02:16.934 00:02:16.934 DPDK 23.11.0 00:02:16.934 00:02:16.934 User defined options 00:02:16.934 buildtype 
: debug 00:02:16.934 default_library : shared 00:02:16.934 libdir : lib 00:02:16.934 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:16.934 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:16.934 c_link_args : 00:02:16.934 cpu_instruction_set: native 00:02:16.934 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:02:16.934 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:02:16.934 enable_docs : false 00:02:16.934 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:16.934 enable_kmods : false 00:02:16.934 tests : false 00:02:16.934 00:02:16.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.507 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:17.507 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.507 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.507 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.507 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.507 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.507 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.507 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.507 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.507 [9/265] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.507 [10/265] Linking static target lib/librte_kvargs.a 00:02:17.507 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.507 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.767 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.767 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.767 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.767 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.767 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.767 [18/265] Linking static target lib/librte_log.a 00:02:17.767 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.767 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.767 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.341 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.341 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.341 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.341 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.341 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.341 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.341 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.341 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.341 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.341 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.341 [32/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.599 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.599 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.599 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.599 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.599 [37/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.599 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.599 [39/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.599 [40/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.599 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.599 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.599 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.599 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.599 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.599 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.599 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.599 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.599 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.599 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.599 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.599 [52/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.599 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.599 [54/265] Linking static 
target lib/librte_telemetry.a 00:02:18.599 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.599 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.599 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.599 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.857 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.857 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.857 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.857 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.857 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.857 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.857 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.857 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.858 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.858 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.858 [69/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.858 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.858 [71/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.858 [72/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.858 [73/265] Linking static target lib/librte_pci.a 00:02:18.858 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.120 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.120 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.120 [77/265] Linking target 
lib/librte_log.so.24.0 00:02:19.120 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.120 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.120 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.120 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.120 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.120 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.120 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.120 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.120 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.381 [87/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:19.381 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.381 [89/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.381 [90/265] Linking target lib/librte_kvargs.so.24.0 00:02:19.381 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.381 [92/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.381 [93/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.381 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.381 [95/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.381 [96/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.381 [97/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.381 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.643 [99/265] Linking static target lib/librte_ring.a 00:02:19.643 [100/265] 
Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.643 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.643 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.643 [103/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.643 [104/265] Linking static target lib/librte_eal.a 00:02:19.643 [105/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.643 [106/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.643 [107/265] Linking static target lib/librte_rcu.a 00:02:19.643 [108/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:19.643 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.643 [110/265] Linking static target lib/librte_meter.a 00:02:19.643 [111/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.643 [112/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.643 [113/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.643 [114/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.643 [115/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.643 [116/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.643 [117/265] Linking static target lib/librte_mempool.a 00:02:19.903 [118/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.903 [119/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.903 [120/265] Linking target lib/librte_telemetry.so.24.0 00:02:19.903 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.903 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.903 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.903 [124/265] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.903 [125/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.903 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.903 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.162 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.162 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.162 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.162 [131/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.162 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.162 [133/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:20.162 [134/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.162 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.162 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.162 [137/265] Linking static target lib/librte_net.a 00:02:20.162 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.162 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.162 [140/265] Linking static target lib/librte_cmdline.a 00:02:20.162 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.162 [142/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.427 [143/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.427 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.427 [145/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.427 [146/265] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.427 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.427 [148/265] Linking static target lib/librte_timer.a 00:02:20.427 [149/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.427 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.427 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.427 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.685 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.685 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.685 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.685 [156/265] Linking static target lib/librte_dmadev.a 00:02:20.685 [157/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.685 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.685 [159/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.685 [160/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.685 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.685 [162/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.944 [163/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.944 [164/265] Linking static target lib/librte_compressdev.a 00:02:20.944 [165/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.944 [166/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.944 [167/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.944 
[168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.944 [169/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.944 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.944 [171/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.944 [172/265] Linking static target lib/librte_hash.a 00:02:20.944 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.944 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.201 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.201 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.201 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.201 [178/265] Linking static target lib/librte_power.a 00:02:21.201 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.201 [180/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.201 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.201 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.201 [183/265] Linking static target lib/librte_mbuf.a 00:02:21.201 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:21.201 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.201 [186/265] Linking static target lib/librte_reorder.a 00:02:21.201 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.202 [188/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.202 [189/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.202 [190/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:21.202 [191/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.202 [192/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.459 [193/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.459 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:21.459 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.459 [196/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:21.459 [197/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:21.459 [198/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.459 [199/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.459 [200/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.459 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.459 [202/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.459 [203/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.459 [204/265] Linking static target drivers/librte_bus_vdev.a 00:02:21.459 [205/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.459 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.459 [207/265] Linking static target drivers/librte_bus_pci.a 00:02:21.717 [208/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.717 [209/265] Linking static target lib/librte_security.a 00:02:21.717 [210/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.717 [211/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.717 [212/265] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:02:21.717 [213/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.717 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.717 [215/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.717 [216/265] Linking static target drivers/librte_mempool_ring.a 00:02:21.717 [217/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.717 [218/265] Linking static target lib/librte_cryptodev.a 00:02:21.717 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.975 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.975 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.975 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.975 [223/265] Linking static target lib/librte_ethdev.a 00:02:22.908 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.181 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.181 [227/265] Linking target lib/librte_eal.so.24.0 00:02:26.438 [228/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.438 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:26.438 [230/265] Linking target lib/librte_ring.so.24.0 00:02:26.438 [231/265] Linking target lib/librte_meter.so.24.0 00:02:26.438 [232/265] Linking target lib/librte_pci.so.24.0 00:02:26.438 [233/265] Linking target lib/librte_timer.so.24.0 00:02:26.438 
[234/265] Linking target lib/librte_dmadev.so.24.0 00:02:26.438 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:26.695 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:26.695 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:26.695 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:26.695 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:26.695 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:26.695 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:26.695 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:26.695 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:26.695 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:26.695 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:26.695 [246/265] Linking target lib/librte_mbuf.so.24.0 00:02:26.695 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:26.953 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:26.953 [249/265] Linking target lib/librte_compressdev.so.24.0 00:02:26.953 [250/265] Linking target lib/librte_reorder.so.24.0 00:02:26.953 [251/265] Linking target lib/librte_net.so.24.0 00:02:26.953 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:27.233 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:27.233 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:27.233 [255/265] Linking target lib/librte_hash.so.24.0 00:02:27.233 [256/265] Linking target lib/librte_security.so.24.0 00:02:27.233 [257/265] Linking target lib/librte_cmdline.so.24.0 00:02:27.233 [258/265] Linking target lib/librte_ethdev.so.24.0 
00:02:27.491 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:27.491 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:27.491 [261/265] Linking target lib/librte_power.so.24.0 00:02:31.669 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.669 [263/265] Linking static target lib/librte_vhost.a 00:02:32.601 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.601 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:32.601 INFO: autodetecting backend as ninja 00:02:32.601 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:34.496 CC lib/ut_mock/mock.o 00:02:34.496 CC lib/log/log.o 00:02:34.496 CC lib/log/log_flags.o 00:02:34.496 CC lib/log/log_deprecated.o 00:02:34.496 CC lib/ut/ut.o 00:02:34.496 LIB libspdk_ut_mock.a 00:02:34.496 LIB libspdk_ut.a 00:02:34.496 SO libspdk_ut_mock.so.6.0 00:02:34.496 LIB libspdk_log.a 00:02:34.496 SO libspdk_ut.so.2.0 00:02:34.496 SO libspdk_log.so.7.0 00:02:34.496 SYMLINK libspdk_ut_mock.so 00:02:34.496 SYMLINK libspdk_ut.so 00:02:34.496 SYMLINK libspdk_log.so 00:02:34.754 CC lib/util/base64.o 00:02:34.754 CC lib/dma/dma.o 00:02:34.754 CC lib/util/bit_array.o 00:02:34.754 CC lib/util/cpuset.o 00:02:34.754 CC lib/util/crc16.o 00:02:34.754 CC lib/util/crc32.o 00:02:34.754 CC lib/util/crc32c.o 00:02:34.754 CC lib/util/crc32_ieee.o 00:02:34.754 CC lib/util/crc64.o 00:02:34.754 CXX lib/trace_parser/trace.o 00:02:34.754 CC lib/util/dif.o 00:02:34.754 CC lib/ioat/ioat.o 00:02:34.754 CC lib/util/fd.o 00:02:34.754 CC lib/util/file.o 00:02:34.754 CC lib/util/hexlify.o 00:02:34.754 CC lib/util/iov.o 00:02:34.754 CC lib/util/math.o 00:02:34.754 CC lib/util/pipe.o 00:02:34.754 CC lib/util/strerror_tls.o 00:02:34.754 CC lib/util/string.o 00:02:34.754 CC lib/util/fd_group.o 00:02:34.754 
CC lib/util/uuid.o 00:02:34.754 CC lib/util/xor.o 00:02:34.754 CC lib/util/zipf.o 00:02:34.754 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.754 CC lib/vfio_user/host/vfio_user.o 00:02:35.011 LIB libspdk_dma.a 00:02:35.011 SO libspdk_dma.so.4.0 00:02:35.011 SYMLINK libspdk_dma.so 00:02:35.011 LIB libspdk_ioat.a 00:02:35.011 SO libspdk_ioat.so.7.0 00:02:35.011 LIB libspdk_vfio_user.a 00:02:35.011 SYMLINK libspdk_ioat.so 00:02:35.011 SO libspdk_vfio_user.so.5.0 00:02:35.269 SYMLINK libspdk_vfio_user.so 00:02:35.269 LIB libspdk_util.a 00:02:35.269 SO libspdk_util.so.9.0 00:02:35.527 SYMLINK libspdk_util.so 00:02:35.785 CC lib/conf/conf.o 00:02:35.785 CC lib/rdma/common.o 00:02:35.785 CC lib/json/json_parse.o 00:02:35.785 CC lib/rdma/rdma_verbs.o 00:02:35.785 CC lib/idxd/idxd.o 00:02:35.785 CC lib/json/json_util.o 00:02:35.785 CC lib/idxd/idxd_user.o 00:02:35.785 CC lib/env_dpdk/env.o 00:02:35.785 CC lib/json/json_write.o 00:02:35.785 CC lib/env_dpdk/memory.o 00:02:35.785 CC lib/env_dpdk/pci.o 00:02:35.785 CC lib/env_dpdk/init.o 00:02:35.785 CC lib/env_dpdk/threads.o 00:02:35.785 CC lib/vmd/vmd.o 00:02:35.785 CC lib/env_dpdk/pci_ioat.o 00:02:35.785 CC lib/env_dpdk/pci_virtio.o 00:02:35.785 CC lib/vmd/led.o 00:02:35.785 CC lib/env_dpdk/pci_vmd.o 00:02:35.785 CC lib/env_dpdk/pci_idxd.o 00:02:35.785 CC lib/env_dpdk/pci_event.o 00:02:35.785 CC lib/env_dpdk/sigbus_handler.o 00:02:35.785 CC lib/env_dpdk/pci_dpdk.o 00:02:35.785 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.785 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.785 LIB libspdk_trace_parser.a 00:02:35.785 SO libspdk_trace_parser.so.5.0 00:02:35.785 SYMLINK libspdk_trace_parser.so 00:02:36.042 LIB libspdk_conf.a 00:02:36.042 SO libspdk_conf.so.6.0 00:02:36.042 LIB libspdk_rdma.a 00:02:36.042 SYMLINK libspdk_conf.so 00:02:36.042 SO libspdk_rdma.so.6.0 00:02:36.042 LIB libspdk_json.a 00:02:36.042 SO libspdk_json.so.6.0 00:02:36.042 SYMLINK libspdk_rdma.so 00:02:36.300 SYMLINK libspdk_json.so 00:02:36.300 LIB libspdk_idxd.a 
00:02:36.300 SO libspdk_idxd.so.12.0 00:02:36.300 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.300 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.300 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.300 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.300 SYMLINK libspdk_idxd.so 00:02:36.300 LIB libspdk_vmd.a 00:02:36.300 SO libspdk_vmd.so.6.0 00:02:36.558 SYMLINK libspdk_vmd.so 00:02:36.558 LIB libspdk_jsonrpc.a 00:02:36.558 SO libspdk_jsonrpc.so.6.0 00:02:36.815 SYMLINK libspdk_jsonrpc.so 00:02:37.073 CC lib/rpc/rpc.o 00:02:37.331 LIB libspdk_rpc.a 00:02:37.331 SO libspdk_rpc.so.6.0 00:02:37.331 SYMLINK libspdk_rpc.so 00:02:37.588 CC lib/trace/trace.o 00:02:37.588 CC lib/notify/notify.o 00:02:37.588 CC lib/notify/notify_rpc.o 00:02:37.588 CC lib/trace/trace_flags.o 00:02:37.588 CC lib/trace/trace_rpc.o 00:02:37.588 CC lib/keyring/keyring.o 00:02:37.588 CC lib/keyring/keyring_rpc.o 00:02:37.846 LIB libspdk_notify.a 00:02:37.846 SO libspdk_notify.so.6.0 00:02:37.846 LIB libspdk_trace.a 00:02:37.846 LIB libspdk_keyring.a 00:02:37.846 SYMLINK libspdk_notify.so 00:02:37.846 SO libspdk_trace.so.10.0 00:02:37.846 SO libspdk_keyring.so.1.0 00:02:37.846 LIB libspdk_env_dpdk.a 00:02:37.846 SYMLINK libspdk_trace.so 00:02:37.846 SYMLINK libspdk_keyring.so 00:02:37.846 SO libspdk_env_dpdk.so.14.0 00:02:38.104 CC lib/sock/sock.o 00:02:38.104 CC lib/sock/sock_rpc.o 00:02:38.104 CC lib/thread/thread.o 00:02:38.104 CC lib/thread/iobuf.o 00:02:38.104 SYMLINK libspdk_env_dpdk.so 00:02:38.669 LIB libspdk_sock.a 00:02:38.669 SO libspdk_sock.so.9.0 00:02:38.669 SYMLINK libspdk_sock.so 00:02:38.927 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.927 CC lib/nvme/nvme_ctrlr.o 00:02:38.927 CC lib/nvme/nvme_fabric.o 00:02:38.927 CC lib/nvme/nvme_ns_cmd.o 00:02:38.927 CC lib/nvme/nvme_ns.o 00:02:38.927 CC lib/nvme/nvme_pcie_common.o 00:02:38.927 CC lib/nvme/nvme_pcie.o 00:02:38.927 CC lib/nvme/nvme_qpair.o 00:02:38.927 CC lib/nvme/nvme.o 00:02:38.927 CC lib/nvme/nvme_quirks.o 00:02:38.927 CC lib/nvme/nvme_transport.o 
00:02:38.927 CC lib/nvme/nvme_discovery.o 00:02:38.927 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.927 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.927 CC lib/nvme/nvme_tcp.o 00:02:38.927 CC lib/nvme/nvme_opal.o 00:02:38.927 CC lib/nvme/nvme_io_msg.o 00:02:38.927 CC lib/nvme/nvme_poll_group.o 00:02:38.927 CC lib/nvme/nvme_zns.o 00:02:38.927 CC lib/nvme/nvme_stubs.o 00:02:38.927 CC lib/nvme/nvme_auth.o 00:02:38.927 CC lib/nvme/nvme_cuse.o 00:02:38.927 CC lib/nvme/nvme_vfio_user.o 00:02:38.927 CC lib/nvme/nvme_rdma.o 00:02:39.862 LIB libspdk_thread.a 00:02:39.862 SO libspdk_thread.so.10.0 00:02:40.120 SYMLINK libspdk_thread.so 00:02:40.120 CC lib/blob/blobstore.o 00:02:40.120 CC lib/virtio/virtio.o 00:02:40.120 CC lib/accel/accel.o 00:02:40.120 CC lib/blob/request.o 00:02:40.120 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.120 CC lib/init/json_config.o 00:02:40.120 CC lib/virtio/virtio_vhost_user.o 00:02:40.120 CC lib/accel/accel_rpc.o 00:02:40.120 CC lib/blob/zeroes.o 00:02:40.120 CC lib/init/subsystem.o 00:02:40.120 CC lib/virtio/virtio_vfio_user.o 00:02:40.120 CC lib/vfu_tgt/tgt_rpc.o 00:02:40.120 CC lib/accel/accel_sw.o 00:02:40.120 CC lib/virtio/virtio_pci.o 00:02:40.120 CC lib/blob/blob_bs_dev.o 00:02:40.120 CC lib/init/subsystem_rpc.o 00:02:40.120 CC lib/init/rpc.o 00:02:40.379 LIB libspdk_init.a 00:02:40.379 SO libspdk_init.so.5.0 00:02:40.636 LIB libspdk_vfu_tgt.a 00:02:40.636 LIB libspdk_virtio.a 00:02:40.636 SO libspdk_vfu_tgt.so.3.0 00:02:40.636 SO libspdk_virtio.so.7.0 00:02:40.636 SYMLINK libspdk_init.so 00:02:40.636 SYMLINK libspdk_vfu_tgt.so 00:02:40.636 SYMLINK libspdk_virtio.so 00:02:40.636 CC lib/event/app.o 00:02:40.636 CC lib/event/reactor.o 00:02:40.636 CC lib/event/log_rpc.o 00:02:40.636 CC lib/event/app_rpc.o 00:02:40.636 CC lib/event/scheduler_static.o 00:02:41.202 LIB libspdk_event.a 00:02:41.202 SO libspdk_event.so.13.0 00:02:41.202 LIB libspdk_accel.a 00:02:41.202 SO libspdk_accel.so.15.0 00:02:41.202 SYMLINK libspdk_event.so 00:02:41.202 SYMLINK 
libspdk_accel.so 00:02:41.460 CC lib/bdev/bdev.o 00:02:41.460 CC lib/bdev/bdev_rpc.o 00:02:41.460 CC lib/bdev/bdev_zone.o 00:02:41.460 CC lib/bdev/part.o 00:02:41.460 CC lib/bdev/scsi_nvme.o 00:02:41.718 LIB libspdk_nvme.a 00:02:41.718 SO libspdk_nvme.so.13.0 00:02:42.284 SYMLINK libspdk_nvme.so 00:02:43.687 LIB libspdk_blob.a 00:02:43.687 SO libspdk_blob.so.11.0 00:02:43.965 SYMLINK libspdk_blob.so 00:02:43.965 CC lib/blobfs/blobfs.o 00:02:43.965 CC lib/blobfs/tree.o 00:02:43.965 CC lib/lvol/lvol.o 00:02:44.531 LIB libspdk_bdev.a 00:02:44.531 SO libspdk_bdev.so.15.0 00:02:44.531 SYMLINK libspdk_bdev.so 00:02:44.796 CC lib/nbd/nbd.o 00:02:44.796 CC lib/nvmf/ctrlr.o 00:02:44.796 CC lib/scsi/dev.o 00:02:44.796 CC lib/nbd/nbd_rpc.o 00:02:44.796 CC lib/nvmf/ctrlr_discovery.o 00:02:44.796 CC lib/scsi/lun.o 00:02:44.796 CC lib/nvmf/ctrlr_bdev.o 00:02:44.796 CC lib/scsi/port.o 00:02:44.796 CC lib/scsi/scsi.o 00:02:44.796 CC lib/nvmf/subsystem.o 00:02:44.796 CC lib/ublk/ublk.o 00:02:44.796 CC lib/nvmf/nvmf.o 00:02:44.796 CC lib/ublk/ublk_rpc.o 00:02:44.796 CC lib/ftl/ftl_core.o 00:02:44.796 CC lib/scsi/scsi_bdev.o 00:02:44.796 CC lib/nvmf/nvmf_rpc.o 00:02:44.796 CC lib/ftl/ftl_init.o 00:02:44.796 CC lib/nvmf/transport.o 00:02:44.796 CC lib/scsi/scsi_pr.o 00:02:44.796 CC lib/scsi/scsi_rpc.o 00:02:44.796 CC lib/ftl/ftl_layout.o 00:02:44.796 CC lib/ftl/ftl_debug.o 00:02:44.796 CC lib/nvmf/tcp.o 00:02:44.796 CC lib/scsi/task.o 00:02:44.796 CC lib/ftl/ftl_io.o 00:02:44.796 CC lib/nvmf/vfio_user.o 00:02:44.796 CC lib/ftl/ftl_sb.o 00:02:44.796 CC lib/ftl/ftl_l2p.o 00:02:44.796 CC lib/ftl/ftl_l2p_flat.o 00:02:44.796 CC lib/nvmf/rdma.o 00:02:44.796 CC lib/ftl/ftl_nv_cache.o 00:02:44.796 CC lib/ftl/ftl_band.o 00:02:44.796 CC lib/ftl/ftl_band_ops.o 00:02:44.796 CC lib/ftl/ftl_writer.o 00:02:44.796 CC lib/ftl/ftl_rq.o 00:02:44.796 CC lib/ftl/ftl_reloc.o 00:02:44.796 CC lib/ftl/ftl_l2p_cache.o 00:02:44.796 CC lib/ftl/ftl_p2l.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.796 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:44.796 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.057 LIB libspdk_lvol.a 00:02:45.057 SO libspdk_lvol.so.10.0 00:02:45.057 LIB libspdk_blobfs.a 00:02:45.057 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.057 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.057 SO libspdk_blobfs.so.10.0 00:02:45.057 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.057 SYMLINK libspdk_lvol.so 00:02:45.057 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.057 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.316 CC lib/ftl/utils/ftl_conf.o 00:02:45.316 CC lib/ftl/utils/ftl_md.o 00:02:45.316 CC lib/ftl/utils/ftl_mempool.o 00:02:45.316 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.316 CC lib/ftl/utils/ftl_property.o 00:02:45.316 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.316 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.316 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.316 SYMLINK libspdk_blobfs.so 00:02:45.316 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.316 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.316 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.316 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.316 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.316 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.316 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.316 CC lib/ftl/base/ftl_base_dev.o 00:02:45.316 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.316 CC lib/ftl/ftl_trace.o 00:02:45.574 LIB libspdk_nbd.a 00:02:45.574 SO libspdk_nbd.so.7.0 00:02:45.832 SYMLINK libspdk_nbd.so 00:02:45.832 LIB libspdk_scsi.a 00:02:45.832 SO libspdk_scsi.so.9.0 00:02:45.832 SYMLINK libspdk_scsi.so 00:02:45.832 LIB libspdk_ublk.a 00:02:45.832 SO libspdk_ublk.so.3.0 00:02:46.089 SYMLINK libspdk_ublk.so 00:02:46.089 CC lib/vhost/vhost.o 00:02:46.089 CC lib/iscsi/conn.o 00:02:46.089 CC lib/vhost/vhost_rpc.o 
00:02:46.089 CC lib/iscsi/init_grp.o 00:02:46.089 CC lib/vhost/vhost_scsi.o 00:02:46.089 CC lib/iscsi/iscsi.o 00:02:46.089 CC lib/vhost/vhost_blk.o 00:02:46.089 CC lib/iscsi/md5.o 00:02:46.089 CC lib/vhost/rte_vhost_user.o 00:02:46.089 CC lib/iscsi/param.o 00:02:46.089 CC lib/iscsi/portal_grp.o 00:02:46.089 CC lib/iscsi/tgt_node.o 00:02:46.089 CC lib/iscsi/iscsi_subsystem.o 00:02:46.089 CC lib/iscsi/iscsi_rpc.o 00:02:46.089 CC lib/iscsi/task.o 00:02:46.089 LIB libspdk_ftl.a 00:02:46.346 SO libspdk_ftl.so.9.0 00:02:46.604 SYMLINK libspdk_ftl.so 00:02:47.536 LIB libspdk_vhost.a 00:02:47.536 LIB libspdk_nvmf.a 00:02:47.536 SO libspdk_vhost.so.8.0 00:02:47.536 SO libspdk_nvmf.so.18.0 00:02:47.536 SYMLINK libspdk_vhost.so 00:02:47.794 SYMLINK libspdk_nvmf.so 00:02:47.794 LIB libspdk_iscsi.a 00:02:47.794 SO libspdk_iscsi.so.8.0 00:02:48.051 SYMLINK libspdk_iscsi.so 00:02:48.309 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.309 CC module/vfu_device/vfu_virtio.o 00:02:48.309 CC module/vfu_device/vfu_virtio_blk.o 00:02:48.309 CC module/vfu_device/vfu_virtio_scsi.o 00:02:48.309 CC module/vfu_device/vfu_virtio_rpc.o 00:02:48.567 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.567 CC module/accel/ioat/accel_ioat.o 00:02:48.567 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.567 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.567 CC module/accel/iaa/accel_iaa.o 00:02:48.567 CC module/blob/bdev/blob_bdev.o 00:02:48.567 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.567 CC module/accel/error/accel_error.o 00:02:48.567 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.567 CC module/accel/error/accel_error_rpc.o 00:02:48.567 CC module/keyring/file/keyring.o 00:02:48.567 CC module/keyring/file/keyring_rpc.o 00:02:48.567 CC module/accel/dsa/accel_dsa.o 00:02:48.567 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.567 CC module/sock/posix/posix.o 00:02:48.567 LIB libspdk_env_dpdk_rpc.a 00:02:48.567 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.567 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:48.567 LIB libspdk_keyring_file.a 00:02:48.567 LIB libspdk_scheduler_gscheduler.a 00:02:48.567 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.567 LIB libspdk_accel_error.a 00:02:48.567 LIB libspdk_scheduler_dynamic.a 00:02:48.567 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.567 SO libspdk_keyring_file.so.1.0 00:02:48.567 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:48.567 SO libspdk_accel_error.so.2.0 00:02:48.567 LIB libspdk_accel_ioat.a 00:02:48.567 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.825 LIB libspdk_accel_iaa.a 00:02:48.825 LIB libspdk_accel_dsa.a 00:02:48.825 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.825 SO libspdk_accel_ioat.so.6.0 00:02:48.825 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.825 SYMLINK libspdk_keyring_file.so 00:02:48.825 SO libspdk_accel_iaa.so.3.0 00:02:48.825 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.825 LIB libspdk_blob_bdev.a 00:02:48.825 SYMLINK libspdk_accel_error.so 00:02:48.825 SO libspdk_accel_dsa.so.5.0 00:02:48.825 SO libspdk_blob_bdev.so.11.0 00:02:48.825 SYMLINK libspdk_accel_ioat.so 00:02:48.825 SYMLINK libspdk_accel_dsa.so 00:02:48.825 SYMLINK libspdk_accel_iaa.so 00:02:48.825 SYMLINK libspdk_blob_bdev.so 00:02:49.086 LIB libspdk_vfu_device.a 00:02:49.086 SO libspdk_vfu_device.so.3.0 00:02:49.086 CC module/bdev/delay/vbdev_delay.o 00:02:49.086 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.086 CC module/bdev/malloc/bdev_malloc.o 00:02:49.086 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.086 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.086 CC module/bdev/aio/bdev_aio.o 00:02:49.086 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.086 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.086 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.086 CC module/bdev/error/vbdev_error.o 00:02:49.086 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.086 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.086 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.086 CC module/blobfs/bdev/blobfs_bdev.o 
00:02:49.086 CC module/bdev/split/vbdev_split.o 00:02:49.086 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.086 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.086 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.086 CC module/bdev/null/bdev_null.o 00:02:49.086 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.086 CC module/bdev/raid/bdev_raid.o 00:02:49.086 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.086 CC module/bdev/nvme/bdev_nvme.o 00:02:49.086 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.086 CC module/bdev/ftl/bdev_ftl.o 00:02:49.086 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.086 CC module/bdev/null/bdev_null_rpc.o 00:02:49.086 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.086 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.086 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.086 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.086 CC module/bdev/nvme/nvme_rpc.o 00:02:49.086 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.086 CC module/bdev/gpt/gpt.o 00:02:49.086 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.086 CC module/bdev/raid/raid0.o 00:02:49.086 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.086 CC module/bdev/raid/raid1.o 00:02:49.086 CC module/bdev/nvme/vbdev_opal.o 00:02:49.086 CC module/bdev/raid/concat.o 00:02:49.086 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.086 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.086 SYMLINK libspdk_vfu_device.so 00:02:49.344 LIB libspdk_sock_posix.a 00:02:49.344 SO libspdk_sock_posix.so.6.0 00:02:49.344 SYMLINK libspdk_sock_posix.so 00:02:49.603 LIB libspdk_blobfs_bdev.a 00:02:49.603 LIB libspdk_bdev_split.a 00:02:49.603 SO libspdk_blobfs_bdev.so.6.0 00:02:49.603 LIB libspdk_bdev_zone_block.a 00:02:49.603 SO libspdk_bdev_split.so.6.0 00:02:49.603 SO libspdk_bdev_zone_block.so.6.0 00:02:49.603 LIB libspdk_bdev_null.a 00:02:49.603 SYMLINK libspdk_blobfs_bdev.so 00:02:49.603 SO libspdk_bdev_null.so.6.0 00:02:49.603 SYMLINK libspdk_bdev_split.so 00:02:49.603 SYMLINK libspdk_bdev_zone_block.so 
00:02:49.603 LIB libspdk_bdev_gpt.a 00:02:49.603 LIB libspdk_bdev_malloc.a 00:02:49.603 SO libspdk_bdev_gpt.so.6.0 00:02:49.603 LIB libspdk_bdev_error.a 00:02:49.603 SO libspdk_bdev_malloc.so.6.0 00:02:49.603 SYMLINK libspdk_bdev_null.so 00:02:49.603 LIB libspdk_bdev_ftl.a 00:02:49.603 LIB libspdk_bdev_iscsi.a 00:02:49.603 LIB libspdk_bdev_passthru.a 00:02:49.603 SO libspdk_bdev_error.so.6.0 00:02:49.603 SO libspdk_bdev_passthru.so.6.0 00:02:49.603 SO libspdk_bdev_ftl.so.6.0 00:02:49.603 SO libspdk_bdev_iscsi.so.6.0 00:02:49.603 SYMLINK libspdk_bdev_gpt.so 00:02:49.603 LIB libspdk_bdev_aio.a 00:02:49.603 SYMLINK libspdk_bdev_malloc.so 00:02:49.861 LIB libspdk_bdev_delay.a 00:02:49.861 SYMLINK libspdk_bdev_error.so 00:02:49.861 SO libspdk_bdev_aio.so.6.0 00:02:49.861 SYMLINK libspdk_bdev_iscsi.so 00:02:49.861 SYMLINK libspdk_bdev_passthru.so 00:02:49.861 SYMLINK libspdk_bdev_ftl.so 00:02:49.861 SO libspdk_bdev_delay.so.6.0 00:02:49.861 SYMLINK libspdk_bdev_aio.so 00:02:49.861 LIB libspdk_bdev_lvol.a 00:02:49.861 SYMLINK libspdk_bdev_delay.so 00:02:49.861 SO libspdk_bdev_lvol.so.6.0 00:02:49.861 LIB libspdk_bdev_virtio.a 00:02:49.861 SO libspdk_bdev_virtio.so.6.0 00:02:49.861 SYMLINK libspdk_bdev_lvol.so 00:02:50.128 SYMLINK libspdk_bdev_virtio.so 00:02:50.128 LIB libspdk_bdev_raid.a 00:02:50.128 SO libspdk_bdev_raid.so.6.0 00:02:50.389 SYMLINK libspdk_bdev_raid.so 00:02:52.304 LIB libspdk_bdev_nvme.a 00:02:52.304 SO libspdk_bdev_nvme.so.7.0 00:02:52.304 SYMLINK libspdk_bdev_nvme.so 00:02:52.563 CC module/event/subsystems/vmd/vmd.o 00:02:52.563 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.563 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.563 CC module/event/subsystems/keyring/keyring.o 00:02:52.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.563 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.563 CC module/event/subsystems/sock/sock.o 00:02:52.563 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.563 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.821 LIB libspdk_event_keyring.a 00:02:52.821 LIB libspdk_event_vhost_blk.a 00:02:52.821 LIB libspdk_event_scheduler.a 00:02:52.821 LIB libspdk_event_iobuf.a 00:02:52.821 SO libspdk_event_keyring.so.1.0 00:02:52.821 SO libspdk_event_vhost_blk.so.3.0 00:02:52.821 SO libspdk_event_scheduler.so.4.0 00:02:52.821 LIB libspdk_event_sock.a 00:02:52.821 SO libspdk_event_iobuf.so.3.0 00:02:52.821 LIB libspdk_event_vfu_tgt.a 00:02:52.821 LIB libspdk_event_vmd.a 00:02:52.821 SYMLINK libspdk_event_keyring.so 00:02:52.821 SO libspdk_event_sock.so.5.0 00:02:52.821 SYMLINK libspdk_event_vhost_blk.so 00:02:52.821 SYMLINK libspdk_event_scheduler.so 00:02:52.821 SO libspdk_event_vfu_tgt.so.3.0 00:02:52.821 SO libspdk_event_vmd.so.6.0 00:02:52.821 SYMLINK libspdk_event_iobuf.so 00:02:52.821 SYMLINK libspdk_event_sock.so 00:02:52.821 SYMLINK libspdk_event_vfu_tgt.so 00:02:52.821 SYMLINK libspdk_event_vmd.so 00:02:53.079 CC module/event/subsystems/accel/accel.o 00:02:53.337 LIB libspdk_event_accel.a 00:02:53.337 SO libspdk_event_accel.so.6.0 00:02:53.595 SYMLINK libspdk_event_accel.so 00:02:53.595 CC module/event/subsystems/bdev/bdev.o 00:02:53.852 LIB libspdk_event_bdev.a 00:02:53.852 SO libspdk_event_bdev.so.6.0 00:02:54.110 SYMLINK libspdk_event_bdev.so 00:02:54.110 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.110 CC module/event/subsystems/nbd/nbd.o 00:02:54.110 CC module/event/subsystems/ublk/ublk.o 00:02:54.110 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.110 CC module/event/subsystems/scsi/scsi.o 00:02:54.368 LIB libspdk_event_nbd.a 00:02:54.368 LIB libspdk_event_scsi.a 00:02:54.368 SO libspdk_event_nbd.so.6.0 00:02:54.368 LIB libspdk_event_ublk.a 00:02:54.368 SO libspdk_event_scsi.so.6.0 00:02:54.368 SO libspdk_event_ublk.so.3.0 00:02:54.368 SYMLINK libspdk_event_nbd.so 00:02:54.368 LIB libspdk_event_nvmf.a 00:02:54.368 SYMLINK libspdk_event_scsi.so 00:02:54.368 SYMLINK libspdk_event_ublk.so 00:02:54.625 SO 
libspdk_event_nvmf.so.6.0 00:02:54.625 SYMLINK libspdk_event_nvmf.so 00:02:54.625 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.625 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.883 LIB libspdk_event_vhost_scsi.a 00:02:54.883 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.883 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.883 LIB libspdk_event_iscsi.a 00:02:54.883 SO libspdk_event_iscsi.so.6.0 00:02:55.141 SYMLINK libspdk_event_iscsi.so 00:02:55.141 SO libspdk.so.6.0 00:02:55.141 SYMLINK libspdk.so 00:02:55.402 CC app/trace_record/trace_record.o 00:02:55.402 CXX app/trace/trace.o 00:02:55.402 CC test/rpc_client/rpc_client_test.o 00:02:55.402 CC app/spdk_lspci/spdk_lspci.o 00:02:55.402 CC app/spdk_nvme_perf/perf.o 00:02:55.402 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.402 CC app/spdk_top/spdk_top.o 00:02:55.402 CC app/spdk_nvme_identify/identify.o 00:02:55.402 TEST_HEADER include/spdk/accel.h 00:02:55.402 TEST_HEADER include/spdk/accel_module.h 00:02:55.402 TEST_HEADER include/spdk/assert.h 00:02:55.402 TEST_HEADER include/spdk/barrier.h 00:02:55.402 TEST_HEADER include/spdk/base64.h 00:02:55.402 TEST_HEADER include/spdk/bdev.h 00:02:55.402 TEST_HEADER include/spdk/bdev_module.h 00:02:55.402 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.402 TEST_HEADER include/spdk/bit_array.h 00:02:55.402 TEST_HEADER include/spdk/bit_pool.h 00:02:55.402 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.402 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.402 TEST_HEADER include/spdk/blobfs.h 00:02:55.402 TEST_HEADER include/spdk/blob.h 00:02:55.402 TEST_HEADER include/spdk/conf.h 00:02:55.402 CC app/spdk_dd/spdk_dd.o 00:02:55.402 TEST_HEADER include/spdk/config.h 00:02:55.402 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.402 TEST_HEADER include/spdk/cpuset.h 00:02:55.402 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.402 TEST_HEADER include/spdk/crc16.h 00:02:55.402 TEST_HEADER include/spdk/crc32.h 00:02:55.402 CC app/nvmf_tgt/nvmf_main.o 00:02:55.402 TEST_HEADER 
include/spdk/crc64.h 00:02:55.664 TEST_HEADER include/spdk/dif.h 00:02:55.664 TEST_HEADER include/spdk/dma.h 00:02:55.664 TEST_HEADER include/spdk/endian.h 00:02:55.664 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.664 CC app/vhost/vhost.o 00:02:55.664 TEST_HEADER include/spdk/env.h 00:02:55.664 TEST_HEADER include/spdk/event.h 00:02:55.664 TEST_HEADER include/spdk/fd_group.h 00:02:55.664 TEST_HEADER include/spdk/fd.h 00:02:55.664 TEST_HEADER include/spdk/file.h 00:02:55.664 TEST_HEADER include/spdk/ftl.h 00:02:55.664 CC app/spdk_tgt/spdk_tgt.o 00:02:55.664 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.664 TEST_HEADER include/spdk/hexlify.h 00:02:55.664 CC test/event/reactor/reactor.o 00:02:55.664 TEST_HEADER include/spdk/histogram_data.h 00:02:55.664 CC test/event/event_perf/event_perf.o 00:02:55.664 TEST_HEADER include/spdk/idxd.h 00:02:55.664 CC test/nvme/reset/reset.o 00:02:55.664 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.664 CC test/app/stub/stub.o 00:02:55.664 CC test/nvme/aer/aer.o 00:02:55.664 TEST_HEADER include/spdk/init.h 00:02:55.664 CC test/app/histogram_perf/histogram_perf.o 00:02:55.664 CC test/app/jsoncat/jsoncat.o 00:02:55.664 CC test/nvme/e2edp/nvme_dp.o 00:02:55.664 TEST_HEADER include/spdk/ioat.h 00:02:55.664 CC test/event/reactor_perf/reactor_perf.o 00:02:55.664 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.664 CC app/fio/nvme/fio_plugin.o 00:02:55.664 CC test/thread/poller_perf/poller_perf.o 00:02:55.664 CC test/nvme/sgl/sgl.o 00:02:55.664 CC examples/util/zipf/zipf.o 00:02:55.664 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.664 CC examples/sock/hello_world/hello_sock.o 00:02:55.664 CC examples/accel/perf/accel_perf.o 00:02:55.664 CC examples/ioat/perf/perf.o 00:02:55.664 TEST_HEADER include/spdk/json.h 00:02:55.664 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.664 CC examples/idxd/perf/perf.o 00:02:55.664 CC examples/nvme/hello_world/hello_world.o 00:02:55.664 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.664 TEST_HEADER include/spdk/keyring.h 
00:02:55.664 CC test/event/app_repeat/app_repeat.o 00:02:55.664 TEST_HEADER include/spdk/keyring_module.h 00:02:55.664 TEST_HEADER include/spdk/likely.h 00:02:55.664 TEST_HEADER include/spdk/log.h 00:02:55.664 TEST_HEADER include/spdk/lvol.h 00:02:55.664 TEST_HEADER include/spdk/memory.h 00:02:55.664 TEST_HEADER include/spdk/mmio.h 00:02:55.664 TEST_HEADER include/spdk/nbd.h 00:02:55.664 TEST_HEADER include/spdk/notify.h 00:02:55.664 CC test/accel/dif/dif.o 00:02:55.664 TEST_HEADER include/spdk/nvme.h 00:02:55.664 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.664 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.664 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.664 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.664 CC test/app/bdev_svc/bdev_svc.o 00:02:55.664 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.664 CC test/blobfs/mkfs/mkfs.o 00:02:55.664 CC test/event/scheduler/scheduler.o 00:02:55.664 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.664 CC test/dma/test_dma/test_dma.o 00:02:55.664 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.664 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.664 TEST_HEADER include/spdk/nvmf.h 00:02:55.664 CC examples/thread/thread/thread_ex.o 00:02:55.664 CC test/bdev/bdevio/bdevio.o 00:02:55.664 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.664 CC examples/nvmf/nvmf/nvmf.o 00:02:55.664 CC examples/blob/hello_world/hello_blob.o 00:02:55.664 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.664 TEST_HEADER include/spdk/opal.h 00:02:55.664 TEST_HEADER include/spdk/opal_spec.h 00:02:55.664 TEST_HEADER include/spdk/pci_ids.h 00:02:55.664 TEST_HEADER include/spdk/pipe.h 00:02:55.664 TEST_HEADER include/spdk/queue.h 00:02:55.665 TEST_HEADER include/spdk/reduce.h 00:02:55.665 TEST_HEADER include/spdk/rpc.h 00:02:55.665 TEST_HEADER include/spdk/scheduler.h 00:02:55.665 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.665 TEST_HEADER include/spdk/scsi.h 00:02:55.665 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.665 CC 
test/lvol/esnap/esnap.o 00:02:55.665 TEST_HEADER include/spdk/sock.h 00:02:55.665 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.665 TEST_HEADER include/spdk/stdinc.h 00:02:55.665 TEST_HEADER include/spdk/string.h 00:02:55.665 TEST_HEADER include/spdk/thread.h 00:02:55.665 TEST_HEADER include/spdk/trace.h 00:02:55.665 TEST_HEADER include/spdk/trace_parser.h 00:02:55.665 LINK spdk_lspci 00:02:55.931 TEST_HEADER include/spdk/tree.h 00:02:55.931 TEST_HEADER include/spdk/ublk.h 00:02:55.931 TEST_HEADER include/spdk/util.h 00:02:55.931 TEST_HEADER include/spdk/uuid.h 00:02:55.931 TEST_HEADER include/spdk/version.h 00:02:55.931 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.931 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.931 TEST_HEADER include/spdk/vhost.h 00:02:55.931 TEST_HEADER include/spdk/vmd.h 00:02:55.931 LINK rpc_client_test 00:02:55.931 TEST_HEADER include/spdk/xor.h 00:02:55.931 TEST_HEADER include/spdk/zipf.h 00:02:55.931 CXX test/cpp_headers/accel.o 00:02:55.931 LINK spdk_nvme_discover 00:02:55.931 LINK reactor 00:02:55.931 LINK jsoncat 00:02:55.931 LINK histogram_perf 00:02:55.931 LINK lsvmd 00:02:55.931 LINK reactor_perf 00:02:55.931 LINK interrupt_tgt 00:02:55.931 LINK poller_perf 00:02:55.931 LINK app_repeat 00:02:55.931 LINK zipf 00:02:55.931 LINK nvmf_tgt 00:02:55.931 LINK event_perf 00:02:55.931 LINK iscsi_tgt 00:02:55.931 LINK stub 00:02:55.931 LINK vhost 00:02:55.931 LINK spdk_tgt 00:02:55.931 LINK spdk_trace_record 00:02:56.192 LINK bdev_svc 00:02:56.192 LINK hello_world 00:02:56.193 LINK ioat_perf 00:02:56.193 LINK mkfs 00:02:56.193 LINK sgl 00:02:56.193 LINK hello_sock 00:02:56.193 LINK scheduler 00:02:56.193 LINK reset 00:02:56.193 LINK nvme_dp 00:02:56.193 LINK hello_bdev 00:02:56.193 LINK thread 00:02:56.193 LINK hello_blob 00:02:56.193 CXX test/cpp_headers/accel_module.o 00:02:56.193 LINK aer 00:02:56.193 LINK nvmf 00:02:56.193 LINK spdk_dd 00:02:56.193 LINK idxd_perf 00:02:56.455 CXX test/cpp_headers/assert.o 00:02:56.455 CC 
test/env/vtophys/vtophys.o 00:02:56.455 CC examples/blob/cli/blobcli.o 00:02:56.455 CXX test/cpp_headers/barrier.o 00:02:56.455 LINK dif 00:02:56.455 LINK spdk_trace 00:02:56.455 CXX test/cpp_headers/base64.o 00:02:56.455 CXX test/cpp_headers/bdev.o 00:02:56.455 CC test/nvme/overhead/overhead.o 00:02:56.455 CC examples/nvme/reconnect/reconnect.o 00:02:56.455 LINK test_dma 00:02:56.455 LINK accel_perf 00:02:56.455 CC test/nvme/err_injection/err_injection.o 00:02:56.455 CC examples/bdev/bdevperf/bdevperf.o 00:02:56.455 CC examples/ioat/verify/verify.o 00:02:56.455 CC examples/vmd/led/led.o 00:02:56.455 LINK bdevio 00:02:56.455 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.723 CXX test/cpp_headers/bdev_module.o 00:02:56.723 CXX test/cpp_headers/bdev_zone.o 00:02:56.723 LINK nvme_fuzz 00:02:56.723 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.723 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.723 CC examples/nvme/arbitration/arbitration.o 00:02:56.723 LINK vtophys 00:02:56.723 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.723 CC examples/nvme/hotplug/hotplug.o 00:02:56.723 CC app/fio/bdev/fio_plugin.o 00:02:56.723 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.723 CC test/nvme/startup/startup.o 00:02:56.723 CC test/env/memory/memory_ut.o 00:02:56.723 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:56.723 CC examples/nvme/abort/abort.o 00:02:56.723 LINK spdk_nvme 00:02:56.723 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.723 CXX test/cpp_headers/bit_array.o 00:02:56.723 CC test/env/pci/pci_ut.o 00:02:56.723 CXX test/cpp_headers/bit_pool.o 00:02:56.723 CC test/nvme/reserve/reserve.o 00:02:56.723 CXX test/cpp_headers/blob_bdev.o 00:02:56.985 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.985 CC test/nvme/connect_stress/connect_stress.o 00:02:56.985 CC test/nvme/simple_copy/simple_copy.o 00:02:56.985 CC test/nvme/boot_partition/boot_partition.o 00:02:56.985 CC test/nvme/compliance/nvme_compliance.o 00:02:56.985 CXX 
test/cpp_headers/blobfs.o 00:02:56.985 LINK led 00:02:56.985 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.985 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.985 CXX test/cpp_headers/blob.o 00:02:56.985 CC test/nvme/fdp/fdp.o 00:02:56.985 CXX test/cpp_headers/conf.o 00:02:56.985 CC test/nvme/cuse/cuse.o 00:02:56.985 CXX test/cpp_headers/config.o 00:02:56.985 LINK verify 00:02:56.985 LINK err_injection 00:02:56.985 LINK env_dpdk_post_init 00:02:56.985 CXX test/cpp_headers/cpuset.o 00:02:56.985 CXX test/cpp_headers/crc16.o 00:02:56.985 CXX test/cpp_headers/crc32.o 00:02:56.985 CXX test/cpp_headers/crc64.o 00:02:56.985 CXX test/cpp_headers/dif.o 00:02:56.985 CXX test/cpp_headers/dma.o 00:02:56.985 CXX test/cpp_headers/endian.o 00:02:56.985 LINK mem_callbacks 00:02:56.985 LINK overhead 00:02:57.244 LINK startup 00:02:57.244 LINK spdk_nvme_perf 00:02:57.244 LINK cmb_copy 00:02:57.244 CXX test/cpp_headers/env_dpdk.o 00:02:57.244 CXX test/cpp_headers/env.o 00:02:57.244 LINK pmr_persistence 00:02:57.244 LINK spdk_top 00:02:57.244 LINK spdk_nvme_identify 00:02:57.244 LINK reconnect 00:02:57.244 LINK boot_partition 00:02:57.244 LINK hotplug 00:02:57.244 CXX test/cpp_headers/event.o 00:02:57.244 CXX test/cpp_headers/fd_group.o 00:02:57.244 LINK connect_stress 00:02:57.244 LINK reserve 00:02:57.244 CXX test/cpp_headers/fd.o 00:02:57.244 CXX test/cpp_headers/file.o 00:02:57.244 LINK arbitration 00:02:57.244 CXX test/cpp_headers/ftl.o 00:02:57.244 CXX test/cpp_headers/gpt_spec.o 00:02:57.244 CXX test/cpp_headers/hexlify.o 00:02:57.244 CXX test/cpp_headers/histogram_data.o 00:02:57.244 CXX test/cpp_headers/idxd.o 00:02:57.508 CXX test/cpp_headers/idxd_spec.o 00:02:57.508 CXX test/cpp_headers/init.o 00:02:57.508 CXX test/cpp_headers/ioat.o 00:02:57.508 LINK doorbell_aers 00:02:57.508 LINK fused_ordering 00:02:57.508 CXX test/cpp_headers/ioat_spec.o 00:02:57.508 LINK simple_copy 00:02:57.508 LINK blobcli 00:02:57.508 CXX test/cpp_headers/iscsi_spec.o 00:02:57.508 CXX 
test/cpp_headers/json.o 00:02:57.508 CXX test/cpp_headers/jsonrpc.o 00:02:57.508 CXX test/cpp_headers/keyring.o 00:02:57.508 CXX test/cpp_headers/keyring_module.o 00:02:57.508 CXX test/cpp_headers/likely.o 00:02:57.508 LINK pci_ut 00:02:57.508 CXX test/cpp_headers/log.o 00:02:57.508 CXX test/cpp_headers/lvol.o 00:02:57.508 LINK nvme_compliance 00:02:57.508 CXX test/cpp_headers/memory.o 00:02:57.508 CXX test/cpp_headers/mmio.o 00:02:57.508 CXX test/cpp_headers/nbd.o 00:02:57.508 LINK abort 00:02:57.508 LINK vhost_fuzz 00:02:57.508 CXX test/cpp_headers/notify.o 00:02:57.508 CXX test/cpp_headers/nvme.o 00:02:57.508 CXX test/cpp_headers/nvme_intel.o 00:02:57.508 LINK fdp 00:02:57.508 CXX test/cpp_headers/nvme_ocssd.o 00:02:57.780 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:57.781 CXX test/cpp_headers/nvme_spec.o 00:02:57.781 CXX test/cpp_headers/nvme_zns.o 00:02:57.781 CXX test/cpp_headers/nvmf_cmd.o 00:02:57.781 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:57.781 CXX test/cpp_headers/nvmf.o 00:02:57.781 CXX test/cpp_headers/nvmf_spec.o 00:02:57.781 LINK spdk_bdev 00:02:57.781 CXX test/cpp_headers/nvmf_transport.o 00:02:57.781 LINK nvme_manage 00:02:57.781 CXX test/cpp_headers/opal.o 00:02:57.781 CXX test/cpp_headers/opal_spec.o 00:02:57.781 CXX test/cpp_headers/pci_ids.o 00:02:57.781 CXX test/cpp_headers/pipe.o 00:02:57.781 CXX test/cpp_headers/queue.o 00:02:57.781 CXX test/cpp_headers/reduce.o 00:02:57.781 CXX test/cpp_headers/rpc.o 00:02:57.781 CXX test/cpp_headers/scheduler.o 00:02:57.781 CXX test/cpp_headers/scsi.o 00:02:57.781 CXX test/cpp_headers/scsi_spec.o 00:02:57.781 CXX test/cpp_headers/sock.o 00:02:57.781 CXX test/cpp_headers/stdinc.o 00:02:57.781 CXX test/cpp_headers/string.o 00:02:57.781 CXX test/cpp_headers/thread.o 00:02:57.781 CXX test/cpp_headers/trace.o 00:02:57.781 CXX test/cpp_headers/trace_parser.o 00:02:57.781 CXX test/cpp_headers/tree.o 00:02:57.781 CXX test/cpp_headers/ublk.o 00:02:57.781 CXX test/cpp_headers/util.o 00:02:58.040 CXX 
test/cpp_headers/uuid.o 00:02:58.040 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.040 CXX test/cpp_headers/version.o 00:02:58.040 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.040 CXX test/cpp_headers/vhost.o 00:02:58.040 CXX test/cpp_headers/vmd.o 00:02:58.040 CXX test/cpp_headers/xor.o 00:02:58.040 CXX test/cpp_headers/zipf.o 00:02:58.040 LINK bdevperf 00:02:58.298 LINK memory_ut 00:02:58.890 LINK cuse 00:02:59.148 LINK iscsi_fuzz 00:03:04.407 LINK esnap 00:03:04.972 00:03:04.972 real 0m57.630s 00:03:04.972 user 10m50.693s 00:03:04.972 sys 2m31.821s 00:03:04.972 21:54:47 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:04.972 21:54:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:04.972 ************************************ 00:03:04.972 END TEST make 00:03:04.972 ************************************ 00:03:04.972 21:54:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:04.972 21:54:47 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:04.972 21:54:47 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:04.972 21:54:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.972 21:54:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:04.972 21:54:47 -- pm/common@45 -- $ pid=3739716 00:03:04.972 21:54:47 -- pm/common@52 -- $ sudo kill -TERM 3739716 00:03:04.972 21:54:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.972 21:54:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:04.972 21:54:47 -- pm/common@45 -- $ pid=3739723 00:03:04.972 21:54:47 -- pm/common@52 -- $ sudo kill -TERM 3739723 00:03:04.972 21:54:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.972 21:54:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:04.972 21:54:47 -- 
pm/common@45 -- $ pid=3739720 00:03:04.972 21:54:47 -- pm/common@52 -- $ sudo kill -TERM 3739720 00:03:05.229 21:54:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.229 21:54:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.229 21:54:47 -- pm/common@45 -- $ pid=3739717 00:03:05.229 21:54:47 -- pm/common@52 -- $ sudo kill -TERM 3739717 00:03:05.229 21:54:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.229 21:54:47 -- nvmf/common.sh@7 -- # uname -s 00:03:05.229 21:54:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.229 21:54:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.229 21:54:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.229 21:54:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.229 21:54:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.229 21:54:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.229 21:54:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.229 21:54:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.229 21:54:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.229 21:54:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.487 21:54:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:05.487 21:54:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:05.487 21:54:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.487 21:54:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.487 21:54:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.487 21:54:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.487 21:54:47 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.487 21:54:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.487 21:54:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.487 21:54:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.487 21:54:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.487 21:54:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.487 21:54:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.487 21:54:47 -- paths/export.sh@5 -- # export PATH 00:03:05.487 21:54:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.487 21:54:47 -- nvmf/common.sh@47 -- # : 0 00:03:05.487 21:54:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.487 21:54:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.487 21:54:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.487 21:54:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:03:05.487 21:54:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.487 21:54:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.487 21:54:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.487 21:54:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.487 21:54:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.487 21:54:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.487 21:54:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.487 21:54:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.487 21:54:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.487 21:54:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.487 21:54:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.487 21:54:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.487 21:54:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.487 21:54:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.487 21:54:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.487 21:54:47 -- spdk/autotest.sh@48 -- # udevadm_pid=3796755 00:03:05.487 21:54:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.487 21:54:47 -- pm/common@17 -- # local monitor 00:03:05.487 21:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.487 21:54:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3796757 00:03:05.487 21:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.487 21:54:47 -- pm/common@21 -- # date +%s 00:03:05.487 21:54:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3796760 00:03:05.487 21:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.487 21:54:47 -- 
pm/common@21 -- # date +%s 00:03:05.487 21:54:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3796764 00:03:05.487 21:54:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.487 21:54:47 -- pm/common@21 -- # date +%s 00:03:05.488 21:54:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3796768 00:03:05.488 21:54:47 -- pm/common@26 -- # sleep 1 00:03:05.488 21:54:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713988487 00:03:05.488 21:54:47 -- pm/common@21 -- # date +%s 00:03:05.488 21:54:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713988487 00:03:05.488 21:54:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713988487 00:03:05.488 21:54:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713988487 00:03:05.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713988487_collect-bmc-pm.bmc.pm.log 00:03:05.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713988487_collect-vmstat.pm.log 00:03:05.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713988487_collect-cpu-load.pm.log 00:03:05.488 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713988487_collect-cpu-temp.pm.log 
00:03:06.422 21:54:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.422 21:54:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.422 21:54:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:06.422 21:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:06.422 21:54:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.422 21:54:48 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:06.422 21:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:06.422 21:54:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:06.422 21:54:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.422 21:54:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.422 21:54:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:06.422 21:54:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.422 21:54:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.422 21:54:48 -- common/autotest_common.sh@1441 -- # uname 00:03:06.422 21:54:48 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:06.422 21:54:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.422 21:54:48 -- common/autotest_common.sh@1461 -- # uname 00:03:06.422 21:54:48 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:06.422 21:54:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:06.422 21:54:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:06.422 21:54:48 -- spdk/autotest.sh@72 -- # hash lcov 00:03:06.422 21:54:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:06.422 21:54:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:06.422 --rc lcov_branch_coverage=1 00:03:06.422 --rc lcov_function_coverage=1 00:03:06.422 --rc 
genhtml_branch_coverage=1 00:03:06.422 --rc genhtml_function_coverage=1 00:03:06.422 --rc genhtml_legend=1 00:03:06.422 --rc geninfo_all_blocks=1 00:03:06.422 ' 00:03:06.422 21:54:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:06.422 --rc lcov_branch_coverage=1 00:03:06.422 --rc lcov_function_coverage=1 00:03:06.422 --rc genhtml_branch_coverage=1 00:03:06.422 --rc genhtml_function_coverage=1 00:03:06.422 --rc genhtml_legend=1 00:03:06.422 --rc geninfo_all_blocks=1 00:03:06.422 ' 00:03:06.422 21:54:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:06.422 --rc lcov_branch_coverage=1 00:03:06.422 --rc lcov_function_coverage=1 00:03:06.422 --rc genhtml_branch_coverage=1 00:03:06.422 --rc genhtml_function_coverage=1 00:03:06.422 --rc genhtml_legend=1 00:03:06.422 --rc geninfo_all_blocks=1 00:03:06.422 --no-external' 00:03:06.422 21:54:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:06.422 --rc lcov_branch_coverage=1 00:03:06.422 --rc lcov_function_coverage=1 00:03:06.422 --rc genhtml_branch_coverage=1 00:03:06.422 --rc genhtml_function_coverage=1 00:03:06.422 --rc genhtml_legend=1 00:03:06.422 --rc geninfo_all_blocks=1 00:03:06.422 --no-external' 00:03:06.422 21:54:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.680 lcov: LCOV version 1.14 00:03:06.680 21:54:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no 
functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:28.606 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:28.606 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:28.607 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 
00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:28.607 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 
00:03:28.607 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:28.608 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:28.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:28.608 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:38.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.573 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:10.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:10.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:10.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:10.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:10.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:28.772 21:56:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:28.772 21:56:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:28.772 21:56:07 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.772 21:56:07 -- spdk/autotest.sh@91 -- # rm -f 00:04:28.772 21:56:07 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.772 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:28.772 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:28.772 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:28.772 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:28.772 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:28.772 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:28.772 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:28.772 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:28.772 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:28.772 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:28.772 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:28.772 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:28.772 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:28.773 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:28.773 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:28.773 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:28.773 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:28.773 21:56:09 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:28.773 21:56:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:28.773 21:56:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:28.773 21:56:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:28.773 21:56:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:28.773 21:56:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:28.773 21:56:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 
00:04:28.773 21:56:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.773 21:56:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:28.773 21:56:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:28.773 21:56:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.773 21:56:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:28.773 21:56:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:28.773 21:56:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:28.773 21:56:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:28.773 No valid GPT data, bailing 00:04:28.773 21:56:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.773 21:56:09 -- scripts/common.sh@391 -- # pt= 00:04:28.773 21:56:09 -- scripts/common.sh@392 -- # return 1 00:04:28.773 21:56:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:28.773 1+0 records in 00:04:28.773 1+0 records out 00:04:28.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00246634 s, 425 MB/s 00:04:28.773 21:56:09 -- spdk/autotest.sh@118 -- # sync 00:04:28.773 21:56:09 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:28.773 21:56:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:28.773 21:56:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:29.707 21:56:11 -- spdk/autotest.sh@124 -- # uname -s 00:04:29.707 21:56:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:29.707 21:56:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.707 21:56:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.707 21:56:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.707 21:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:29.707 
************************************ 00:04:29.707 START TEST setup.sh 00:04:29.707 ************************************ 00:04:29.707 21:56:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.707 * Looking for test storage... 00:04:29.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.707 21:56:11 -- setup/test-setup.sh@10 -- # uname -s 00:04:29.707 21:56:11 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:29.707 21:56:11 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.707 21:56:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.965 21:56:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.965 21:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:29.965 ************************************ 00:04:29.965 START TEST acl 00:04:29.965 ************************************ 00:04:29.965 21:56:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.965 * Looking for test storage... 
00:04:29.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.965 21:56:12 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:29.965 21:56:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:29.965 21:56:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:29.965 21:56:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:29.965 21:56:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.965 21:56:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:29.965 21:56:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:29.965 21:56:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.965 21:56:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.965 21:56:12 -- setup/acl.sh@12 -- # devs=() 00:04:29.965 21:56:12 -- setup/acl.sh@12 -- # declare -a devs 00:04:29.965 21:56:12 -- setup/acl.sh@13 -- # drivers=() 00:04:29.965 21:56:12 -- setup/acl.sh@13 -- # declare -A drivers 00:04:29.965 21:56:12 -- setup/acl.sh@51 -- # setup reset 00:04:29.965 21:56:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.965 21:56:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.864 21:56:13 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:31.864 21:56:13 -- setup/acl.sh@16 -- # local dev driver 00:04:31.864 21:56:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.864 21:56:13 -- setup/acl.sh@15 -- # setup output status 00:04:31.864 21:56:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.864 21:56:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.798 Hugepages 00:04:32.798 node hugesize free / total 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # continue 00:04:32.798 21:56:15 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # continue 00:04:32.798 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # continue 00:04:32.798 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.798 00:04:32.798 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.798 21:56:15 -- setup/acl.sh@19 -- # continue 00:04:32.798 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- 
setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.055 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.055 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.055 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.056 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:33.056 21:56:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:33.056 21:56:15 -- setup/acl.sh@20 -- # continue 00:04:33.056 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.056 21:56:15 -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:33.056 21:56:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:33.056 21:56:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:33.056 21:56:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:33.056 21:56:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:33.056 21:56:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.056 21:56:15 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:33.056 21:56:15 -- setup/acl.sh@54 -- # run_test denied denied 00:04:33.056 21:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.056 21:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.056 21:56:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.056 ************************************ 00:04:33.056 START TEST denied 00:04:33.056 
************************************ 00:04:33.056 21:56:15 -- common/autotest_common.sh@1111 -- # denied 00:04:33.056 21:56:15 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:33.056 21:56:15 -- setup/acl.sh@38 -- # setup output config 00:04:33.056 21:56:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.056 21:56:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.056 21:56:15 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:34.952 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:34.952 21:56:16 -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:34.952 21:56:16 -- setup/acl.sh@28 -- # local dev driver 00:04:34.952 21:56:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:34.952 21:56:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:34.952 21:56:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:34.952 21:56:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:34.952 21:56:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:34.952 21:56:16 -- setup/acl.sh@41 -- # setup reset 00:04:34.952 21:56:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.952 21:56:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.851 00:04:36.851 real 0m3.739s 00:04:36.851 user 0m1.088s 00:04:36.851 sys 0m1.907s 00:04:36.851 21:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.851 21:56:19 -- common/autotest_common.sh@10 -- # set +x 00:04:36.851 ************************************ 00:04:36.851 END TEST denied 00:04:36.851 ************************************ 00:04:36.851 21:56:19 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:36.851 21:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.851 21:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.851 
21:56:19 -- common/autotest_common.sh@10 -- # set +x 00:04:37.109 ************************************ 00:04:37.109 START TEST allowed 00:04:37.109 ************************************ 00:04:37.109 21:56:19 -- common/autotest_common.sh@1111 -- # allowed 00:04:37.109 21:56:19 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:37.109 21:56:19 -- setup/acl.sh@45 -- # setup output config 00:04:37.109 21:56:19 -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:37.109 21:56:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.109 21:56:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.637 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.637 21:56:21 -- setup/acl.sh@47 -- # verify 00:04:39.637 21:56:21 -- setup/acl.sh@28 -- # local dev driver 00:04:39.637 21:56:21 -- setup/acl.sh@48 -- # setup reset 00:04:39.637 21:56:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.637 21:56:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.010 00:04:41.010 real 0m4.113s 00:04:41.010 user 0m1.083s 00:04:41.010 sys 0m1.973s 00:04:41.010 21:56:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.010 21:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.010 ************************************ 00:04:41.010 END TEST allowed 00:04:41.010 ************************************ 00:04:41.269 00:04:41.269 real 0m11.190s 00:04:41.269 user 0m3.446s 00:04:41.269 sys 0m6.025s 00:04:41.269 21:56:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.269 21:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.269 ************************************ 00:04:41.269 END TEST acl 00:04:41.269 ************************************ 00:04:41.269 21:56:23 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:41.269 21:56:23 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.269 21:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.269 21:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.269 ************************************ 00:04:41.269 START TEST hugepages 00:04:41.269 ************************************ 00:04:41.269 21:56:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:41.269 * Looking for test storage... 00:04:41.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:41.269 21:56:23 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:41.269 21:56:23 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:41.269 21:56:23 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:41.269 21:56:23 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:41.269 21:56:23 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:41.269 21:56:23 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:41.269 21:56:23 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:41.269 21:56:23 -- setup/common.sh@18 -- # local node= 00:04:41.269 21:56:23 -- setup/common.sh@19 -- # local var val 00:04:41.269 21:56:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.269 21:56:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.269 21:56:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.269 21:56:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.269 21:56:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.269 21:56:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 23702536 kB' 'MemAvailable: 28881680 kB' 'Buffers: 2696 kB' 'Cached: 13223188 kB' 
'SwapCached: 0 kB' 'Active: 9157936 kB' 'Inactive: 4630096 kB' 'Active(anon): 8587828 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565432 kB' 'Mapped: 218104 kB' 'Shmem: 8025680 kB' 'KReclaimable: 525280 kB' 'Slab: 901888 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376608 kB' 'KernelStack: 12896 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304772 kB' 'Committed_AS: 9773376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196604 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 
21:56:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- 
setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 21:56:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 
-- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # continue 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 21:56:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 21:56:23 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:41.270 21:56:23 -- setup/common.sh@33 -- # echo 2048 00:04:41.270 21:56:23 -- setup/common.sh@33 -- # return 0 00:04:41.270 21:56:23 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:41.270 21:56:23 -- setup/hugepages.sh@17 -- # 
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:41.270 21:56:23 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:41.270 21:56:23 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:41.270 21:56:23 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:41.270 21:56:23 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:41.270 21:56:23 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:41.270 21:56:23 -- setup/hugepages.sh@207 -- # get_nodes 00:04:41.270 21:56:23 -- setup/hugepages.sh@27 -- # local node 00:04:41.270 21:56:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.270 21:56:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:41.270 21:56:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.270 21:56:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.270 21:56:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.270 21:56:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.270 21:56:23 -- setup/hugepages.sh@208 -- # clear_hp 00:04:41.270 21:56:23 -- setup/hugepages.sh@37 -- # local node hp 00:04:41.270 21:56:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.270 21:56:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.270 21:56:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.270 21:56:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.270 21:56:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.270 21:56:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.270 21:56:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.270 21:56:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.270 21:56:23 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.270 21:56:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.270 21:56:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:41.270 21:56:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:41.270 21:56:23 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:41.528 21:56:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.528 21:56:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.528 21:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.528 ************************************ 00:04:41.528 START TEST default_setup 00:04:41.528 ************************************ 00:04:41.528 21:56:23 -- common/autotest_common.sh@1111 -- # default_setup 00:04:41.528 21:56:23 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:41.528 21:56:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.528 21:56:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:41.528 21:56:23 -- setup/hugepages.sh@51 -- # shift 00:04:41.528 21:56:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:41.528 21:56:23 -- setup/hugepages.sh@52 -- # local node_ids 00:04:41.528 21:56:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.528 21:56:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.528 21:56:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:41.528 21:56:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:41.528 21:56:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.528 21:56:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.528 21:56:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.528 21:56:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.528 21:56:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.528 21:56:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:41.528 21:56:23 -- setup/hugepages.sh@70 -- # for _no_nodes in 
"${user_nodes[@]}" 00:04:41.528 21:56:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:41.528 21:56:23 -- setup/hugepages.sh@73 -- # return 0 00:04:41.528 21:56:23 -- setup/hugepages.sh@137 -- # setup output 00:04:41.528 21:56:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.528 21:56:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.901 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.901 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.901 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:43.836 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.097 21:56:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:44.097 21:56:26 -- setup/hugepages.sh@89 -- # local node 00:04:44.097 21:56:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.097 21:56:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.097 21:56:26 -- setup/hugepages.sh@92 -- # local surp 00:04:44.097 21:56:26 -- setup/hugepages.sh@93 -- # local resv 00:04:44.097 21:56:26 -- setup/hugepages.sh@94 -- # local anon 00:04:44.097 21:56:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.097 21:56:26 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.097 21:56:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.097 21:56:26 -- setup/common.sh@18 -- # local node= 00:04:44.097 21:56:26 -- setup/common.sh@19 -- # local var val 00:04:44.097 21:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.097 21:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.097 21:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.097 21:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.097 21:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.097 21:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25794668 kB' 'MemAvailable: 30973812 kB' 'Buffers: 2696 kB' 'Cached: 13223284 kB' 'SwapCached: 0 kB' 'Active: 9177480 kB' 'Inactive: 4630096 kB' 'Active(anon): 8607372 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584780 kB' 'Mapped: 218132 kB' 'Shmem: 8025776 kB' 'KReclaimable: 525280 kB' 'Slab: 901148 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 375868 kB' 'KernelStack: 13088 kB' 'PageTables: 9876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9798392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196844 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.097 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.097 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.098 21:56:26 -- setup/common.sh@32 -- # 
00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue
00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': '
00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue trace repeated for SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted ...]
00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.098 21:56:26 -- setup/common.sh@33 -- # echo 0
00:04:44.098 21:56:26 -- setup/common.sh@33 -- # return 0
00:04:44.098 21:56:26 -- setup/hugepages.sh@97 -- # anon=0
00:04:44.098 21:56:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.098 21:56:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.098 21:56:26 -- setup/common.sh@18 -- # local node=
00:04:44.098 21:56:26 -- setup/common.sh@19 -- # local var val
00:04:44.098 21:56:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.098 21:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.098 21:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.098 21:56:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.098 21:56:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.098 21:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.098 21:56:26 -- setup/common.sh@31 -- # IFS=': '
00:04:44.098 21:56:26 -- setup/common.sh@31 -- # read -r var val _
00:04:44.098 21:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25794912 kB' 'MemAvailable: 30974056 kB' 'Buffers: 2696 kB' 'Cached: 13223288 kB' 'SwapCached: 0 kB' 'Active: 9178644 kB' 'Inactive: 4630096 kB' 'Active(anon): 8608536 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586044 kB' 'Mapped: 218188 kB' 'Shmem: 8025780 kB' 'KReclaimable: 525280 kB' 'Slab: 901260 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 375980 kB' 'KernelStack: 13456 kB' 'PageTables: 10988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9798408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196924 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB'
00:04:44.098 21:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.098 21:56:26 -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace (IFS=': ' / read -r var val _) repeated for every /proc/meminfo key preceding HugePages_Surp ...]
00:04:44.100 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.100 21:56:26 -- setup/common.sh@33 -- # echo 0
00:04:44.100 21:56:26 -- setup/common.sh@33 -- # return 0
00:04:44.100 21:56:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:44.100 21:56:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.100 21:56:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.100 21:56:26 -- setup/common.sh@18 -- # local node=
00:04:44.100 21:56:26 -- setup/common.sh@19 -- # local var val
00:04:44.100 21:56:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.100 21:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.100 21:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.100 21:56:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.100 21:56:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.100 21:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.100 21:56:26 -- setup/common.sh@31 -- # IFS=': '
00:04:44.100 21:56:26 -- setup/common.sh@31 -- # read -r var val _
00:04:44.100 21:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25796660 kB' 'MemAvailable: 30975804 kB' 'Buffers: 2696 kB' 'Cached: 13223300 kB' 'SwapCached: 0 kB' 'Active: 9176508 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606400 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583820 kB' 'Mapped: 218072 kB' 'Shmem: 8025792 kB' 'KReclaimable: 525280 kB' 'Slab: 901324 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12848 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9797392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196684 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB'
00:04:44.100 21:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.100 21:56:26 -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace repeated for every /proc/meminfo key preceding HugePages_Rsvd ...]
00:04:44.101 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.101 21:56:26 -- setup/common.sh@33 -- # echo 0
00:04:44.101 21:56:26 -- setup/common.sh@33 -- # return 0
00:04:44.101 21:56:26 -- setup/hugepages.sh@100 -- # resv=0
00:04:44.101 21:56:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:44.101 21:56:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:44.101 21:56:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:44.101 21:56:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:44.101 21:56:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:44.101 21:56:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:44.101 21:56:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.101 21:56:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.101 21:56:26 -- setup/common.sh@18 -- # local node=
00:04:44.101 21:56:26 -- setup/common.sh@19 -- # local var val
00:04:44.101 21:56:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.101 21:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.101 21:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.101 21:56:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.101 21:56:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.101 21:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.101 21:56:26 -- setup/common.sh@31 -- # IFS=': '
00:04:44.101 21:56:26 -- setup/common.sh@31 -- # read -r var val _
00:04:44.101 21:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25796660 kB' 'MemAvailable: 30975804 kB' 'Buffers: 2696 kB' 'Cached: 13223312 kB' 'SwapCached: 0 kB' 'Active: 9177204 kB' 'Inactive: 4630096 kB' 'Active(anon): 8607096 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584532 kB' 'Mapped: 218072 kB' 'Shmem: 8025804 kB' 'KReclaimable: 525280 kB' 'Slab: 901324 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12880 kB' 'PageTables: 9464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9823816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB'
00:04:44.101 21:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.101 21:56:26 -- setup/common.sh@32 -- # continue
[... identical compare-and-continue trace for each subsequent key ...]
00:04:44.102 21:56:26 --
setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.102 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.102 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.103 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.103 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.103 21:56:26 -- setup/common.sh@33 -- # echo 1024 00:04:44.103 21:56:26 -- setup/common.sh@33 -- # return 0 00:04:44.362 21:56:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.362 21:56:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.362 21:56:26 -- setup/hugepages.sh@27 -- # local node 00:04:44.362 21:56:26 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:44.362 21:56:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.362 21:56:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.362 21:56:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:44.362 21:56:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.362 21:56:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.362 21:56:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.362 21:56:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.362 21:56:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.362 21:56:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.362 21:56:26 -- setup/common.sh@18 -- # local node=0 00:04:44.362 21:56:26 -- setup/common.sh@19 -- # local var val 00:04:44.362 21:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.362 21:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.362 21:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.362 21:56:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.362 21:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.362 21:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 15652168 kB' 'MemUsed: 8967244 kB' 'SwapCached: 0 kB' 'Active: 5828880 kB' 'Inactive: 331748 kB' 'Active(anon): 5417360 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732356 kB' 'Mapped: 156332 kB' 'AnonPages: 431440 kB' 'Shmem: 4989088 kB' 'KernelStack: 7256 kB' 'PageTables: 4384 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316412 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 179120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # continue 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.363 21:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.363 21:56:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.364 21:56:26 -- setup/common.sh@33 -- # echo 0 00:04:44.364 21:56:26 -- setup/common.sh@33 -- # return 0 00:04:44.364 21:56:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.364 21:56:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.364 21:56:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.364 21:56:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.364 21:56:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.364 node0=1024 expecting 1024 00:04:44.364 21:56:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.364 00:04:44.364 real 0m2.752s 00:04:44.364 user 0m0.729s 00:04:44.364 sys 0m1.033s 00:04:44.364 21:56:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.364 21:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.364 ************************************ 00:04:44.364 END TEST default_setup 00:04:44.364 ************************************ 00:04:44.364 21:56:26 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:44.364 21:56:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.364 21:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.364 21:56:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.364 ************************************ 00:04:44.364 
START TEST per_node_1G_alloc 00:04:44.364 ************************************ 00:04:44.364 21:56:26 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:04:44.364 21:56:26 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:44.364 21:56:26 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:44.364 21:56:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:44.364 21:56:26 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:44.364 21:56:26 -- setup/hugepages.sh@51 -- # shift 00:04:44.364 21:56:26 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:44.364 21:56:26 -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.364 21:56:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.364 21:56:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:44.364 21:56:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:44.364 21:56:26 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:44.364 21:56:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.364 21:56:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:44.364 21:56:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.364 21:56:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.364 21:56:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.364 21:56:26 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:44.364 21:56:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.364 21:56:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:44.364 21:56:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.364 21:56:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:44.364 21:56:26 -- setup/hugepages.sh@73 -- # return 0 00:04:44.364 21:56:26 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:44.364 21:56:26 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:44.364 21:56:26 -- setup/hugepages.sh@146 -- # setup output 00:04:44.364 21:56:26 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:44.364 21:56:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.796 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.796 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.796 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.796 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.796 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.796 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.796 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.796 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.796 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.796 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.796 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.796 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.796 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.796 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.796 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.796 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.796 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.796 21:56:27 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:45.796 21:56:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:45.796 21:56:27 -- setup/hugepages.sh@89 -- # local node 00:04:45.796 21:56:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.796 21:56:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.796 21:56:27 -- setup/hugepages.sh@92 -- # local surp 00:04:45.796 21:56:27 -- setup/hugepages.sh@93 -- # local resv 00:04:45.796 21:56:27 -- setup/hugepages.sh@94 -- # local anon 00:04:45.796 21:56:27 -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.796 21:56:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.796 21:56:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.796 21:56:27 -- setup/common.sh@18 -- # local node= 00:04:45.796 21:56:27 -- setup/common.sh@19 -- # local var val 00:04:45.796 21:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.796 21:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.796 21:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.796 21:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.797 21:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.797 21:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25802252 kB' 'MemAvailable: 30981396 kB' 'Buffers: 2696 kB' 'Cached: 13223368 kB' 'SwapCached: 0 kB' 'Active: 9177048 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606940 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584344 kB' 'Mapped: 218184 kB' 'Shmem: 8025860 kB' 'KReclaimable: 525280 kB' 'Slab: 901316 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376036 kB' 'KernelStack: 12864 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9797876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196748 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- 
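The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` check traced above tests the transparent-hugepage state before sampling AnonHugePages: the check passes whenever the sysfs state does not read `[never]`. A minimal sketch of the same check, assuming the standard sysfs path (systems without THP support simply skip it):

```shell
# Sketch of the transparent-hugepage state check seen in the trace:
# AnonHugePages is only worth sampling when THP is not fully disabled.
# Standard sysfs path assumed; guarded for kernels without THP.
thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp_enabled ]]; then
	state=$(<"$thp_enabled")            # e.g. "always [madvise] never"
	if [[ $state != *"[never]"* ]]; then
		echo "THP active mode: $state"
	else
		echo "THP disabled"
	fi
fi
```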
setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 
00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.797 21:56:27 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.797 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.797 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.798 21:56:27 -- setup/common.sh@33 -- # echo 0 00:04:45.798 21:56:27 -- setup/common.sh@33 -- # return 0 00:04:45.798 21:56:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:45.798 21:56:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.798 21:56:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.798 21:56:27 -- setup/common.sh@18 -- # local node= 00:04:45.798 21:56:27 -- setup/common.sh@19 -- # local var val 00:04:45.798 21:56:27 -- setup/common.sh@20 -- # local mem_f mem 
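Each `get_meminfo` call traced above walks /proc/meminfo with `IFS=': ' read -r var val _`, hitting `continue` on every field until the requested one matches. A compact re-creation of that parse — an illustrative sketch, not the setup/common.sh helper itself:

```shell
# Compact re-creation of the get_meminfo parse traced above: split each
# /proc/meminfo line on ': ', print the value of the requested field
# (the trailing "kB" unit falls into the discarded third field).
get_meminfo() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done </proc/meminfo
	echo 0    # fallback when the field is absent
}

get_meminfo HugePages_Rsvd
```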
00:04:45.798 21:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.798 21:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.798 21:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.798 21:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.798 21:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25815276 kB' 'MemAvailable: 30994420 kB' 'Buffers: 2696 kB' 'Cached: 13223368 kB' 'SwapCached: 0 kB' 'Active: 9177460 kB' 'Inactive: 4630096 kB' 'Active(anon): 8607352 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584852 kB' 'Mapped: 218260 kB' 'Shmem: 8025860 kB' 'KReclaimable: 525280 kB' 'Slab: 901324 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12848 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9797888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 
21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 
-- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.798 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.798 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.799 21:56:27 -- setup/common.sh@33 -- # echo 0 00:04:45.799 21:56:27 -- setup/common.sh@33 -- # return 0 00:04:45.799 21:56:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:45.799 21:56:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.799 21:56:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.799 21:56:27 -- setup/common.sh@18 -- # local node= 00:04:45.799 21:56:27 -- setup/common.sh@19 -- # local var val 00:04:45.799 21:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.799 21:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.799 21:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.799 21:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.799 21:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.799 21:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25815340 kB' 'MemAvailable: 30994484 kB' 'Buffers: 2696 kB' 'Cached: 13223384 kB' 'SwapCached: 0 kB' 'Active: 9176732 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606624 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584060 kB' 'Mapped: 218104 kB' 'Shmem: 8025876 kB' 'KReclaimable: 525280 kB' 'Slab: 901332 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376052 kB' 'KernelStack: 
12864 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9797900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.799 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.799 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:27 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.800 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.800 21:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.800 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.800 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.801 21:56:28 -- setup/common.sh@33 -- # echo 0 00:04:45.801 21:56:28 -- setup/common.sh@33 -- # return 0 00:04:45.801 21:56:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.801 21:56:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.801 nr_hugepages=1024 00:04:45.801 21:56:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.801 resv_hugepages=0 00:04:45.801 21:56:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.801 surplus_hugepages=0 00:04:45.801 21:56:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.801 anon_hugepages=0 00:04:45.801 21:56:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.801 21:56:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.801 21:56:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.801 21:56:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.801 21:56:28 -- setup/common.sh@18 -- # local node= 00:04:45.801 21:56:28 -- setup/common.sh@19 -- # local var val 00:04:45.801 21:56:28 -- setup/common.sh@20 -- # local 
mem_f mem 00:04:45.801 21:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.801 21:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.801 21:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.801 21:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.801 21:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25814584 kB' 'MemAvailable: 30993728 kB' 'Buffers: 2696 kB' 'Cached: 13223400 kB' 'SwapCached: 0 kB' 'Active: 9176768 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606660 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584064 kB' 'Mapped: 218104 kB' 'Shmem: 8025892 kB' 'KReclaimable: 525280 kB' 'Slab: 901324 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376044 kB' 'KernelStack: 12864 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9797916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 
00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.801 21:56:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.801 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.801 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.802 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.802 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.802 21:56:28 -- setup/common.sh@33 -- # echo 1024 00:04:45.802 21:56:28 -- setup/common.sh@33 -- # return 0 00:04:45.802 21:56:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.802 21:56:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.802 21:56:28 -- setup/hugepages.sh@27 -- # local node 00:04:45.802 21:56:28 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:45.802 21:56:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.802 21:56:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.802 21:56:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.802 21:56:28 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.802 21:56:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.802 21:56:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.802 21:56:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.802 21:56:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.802 21:56:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.802 21:56:28 -- setup/common.sh@18 -- # local node=0 00:04:45.802 21:56:28 -- setup/common.sh@19 -- # local var val 00:04:45.802 21:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.802 21:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.802 21:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.803 21:56:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.803 21:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.803 21:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 16715668 kB' 'MemUsed: 7903744 kB' 'SwapCached: 0 kB' 'Active: 5828692 kB' 'Inactive: 331748 kB' 'Active(anon): 5417172 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732428 kB' 'Mapped: 156220 kB' 'AnonPages: 431248 kB' 'Shmem: 4989160 kB' 'KernelStack: 7272 kB' 'PageTables: 4396 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316464 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 179172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.803 21:56:28 -- setup/common.sh@32 -- # continue 00:04:45.803 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.063 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.063 21:56:28 -- setup/common.sh@33 -- # echo 0 00:04:46.063 21:56:28 -- setup/common.sh@33 -- # return 0 00:04:46.063 21:56:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.063 21:56:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.063 21:56:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.063 21:56:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:46.063 21:56:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.063 21:56:28 -- setup/common.sh@18 -- # local node=1 00:04:46.063 21:56:28 -- setup/common.sh@19 -- # local var val 00:04:46.063 21:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.063 21:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.063 21:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.063 21:56:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.063 21:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.063 21:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.063 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407232 kB' 'MemFree: 9099080 kB' 'MemUsed: 10308152 kB' 'SwapCached: 0 kB' 
'Active: 3348044 kB' 'Inactive: 4298348 kB' 'Active(anon): 3189456 kB' 'Inactive(anon): 0 kB' 'Active(file): 158588 kB' 'Inactive(file): 4298348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7493680 kB' 'Mapped: 61884 kB' 'AnonPages: 152772 kB' 'Shmem: 3036744 kB' 'KernelStack: 5576 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 387988 kB' 'Slab: 584860 kB' 'SReclaimable: 387988 kB' 'SUnreclaim: 196872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 
21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 
21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # continue 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.064 21:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.064 21:56:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.064 21:56:28 -- setup/common.sh@33 -- # echo 0 00:04:46.064 21:56:28 -- setup/common.sh@33 -- # return 0 00:04:46.064 21:56:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.064 21:56:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.064 21:56:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.064 21:56:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.064 21:56:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.064 node0=512 expecting 512 00:04:46.064 21:56:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.065 21:56:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.065 21:56:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.065 21:56:28 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:46.065 node1=512 expecting 512 00:04:46.065 21:56:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.065 00:04:46.065 real 
0m1.573s 00:04:46.065 user 0m0.656s 00:04:46.065 sys 0m0.890s 00:04:46.065 21:56:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.065 21:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 END TEST per_node_1G_alloc 00:04:46.065 ************************************ 00:04:46.065 21:56:28 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:46.065 21:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.065 21:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.065 21:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.065 ************************************ 00:04:46.065 START TEST even_2G_alloc 00:04:46.065 ************************************ 00:04:46.065 21:56:28 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:04:46.065 21:56:28 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:46.065 21:56:28 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.065 21:56:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.065 21:56:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.065 21:56:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.065 21:56:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.065 21:56:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.065 21:56:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.065 21:56:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.065 21:56:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.065 21:56:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:46.065 21:56:28 -- setup/hugepages.sh@83 -- # : 512 00:04:46.065 21:56:28 -- setup/hugepages.sh@84 -- # : 1 00:04:46.065 21:56:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:46.065 21:56:28 -- setup/hugepages.sh@83 -- # : 0 00:04:46.065 21:56:28 -- setup/hugepages.sh@84 -- # : 0 00:04:46.065 21:56:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.065 21:56:28 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:46.065 21:56:28 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:46.065 21:56:28 -- setup/hugepages.sh@153 -- # setup output 00:04:46.065 21:56:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.065 21:56:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.478 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.478 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.478 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.478 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.478 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.478 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.478 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.478 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.478 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.478 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.478 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.478 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.478 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.478 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.478 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.478 0000:80:04.1 (8086 
0e21): Already using the vfio-pci driver 00:04:47.478 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.478 21:56:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:47.478 21:56:29 -- setup/hugepages.sh@89 -- # local node 00:04:47.478 21:56:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.478 21:56:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.478 21:56:29 -- setup/hugepages.sh@92 -- # local surp 00:04:47.478 21:56:29 -- setup/hugepages.sh@93 -- # local resv 00:04:47.478 21:56:29 -- setup/hugepages.sh@94 -- # local anon 00:04:47.478 21:56:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.478 21:56:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.478 21:56:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.478 21:56:29 -- setup/common.sh@18 -- # local node= 00:04:47.478 21:56:29 -- setup/common.sh@19 -- # local var val 00:04:47.478 21:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.478 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.478 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.478 21:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.478 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.478 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25825608 kB' 'MemAvailable: 31004752 kB' 'Buffers: 2696 kB' 'Cached: 13223468 kB' 'SwapCached: 0 kB' 'Active: 9183616 kB' 'Inactive: 4630096 kB' 'Active(anon): 8613508 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 590784 kB' 'Mapped: 218540 kB' 'Shmem: 8025960 kB' 'KReclaimable: 525280 kB' 'Slab: 901312 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376032 kB' 'KernelStack: 13312 kB' 'PageTables: 10964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9805468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196892 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- 
setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- 
setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.478 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.478 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.479 21:56:29 -- setup/common.sh@33 -- # echo 0 00:04:47.479 21:56:29 -- setup/common.sh@33 -- # return 0 00:04:47.479 21:56:29 -- setup/hugepages.sh@97 -- # anon=0 00:04:47.479 21:56:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.479 21:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.479 21:56:29 -- setup/common.sh@18 -- # local node= 00:04:47.479 21:56:29 -- setup/common.sh@19 -- # local var val 00:04:47.479 21:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.479 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.479 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.479 21:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.479 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.479 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.479 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.479 21:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25825312 kB' 'MemAvailable: 31004456 kB' 'Buffers: 2696 kB' 'Cached: 13223468 kB' 'SwapCached: 0 kB' 'Active: 9183224 kB' 'Inactive: 4630096 kB' 'Active(anon): 8613116 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590460 kB' 'Mapped: 218932 kB' 'Shmem: 8025960 kB' 'KReclaimable: 525280 kB' 'Slab: 901280 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376000 kB' 'KernelStack: 12928 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9804272 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 196752 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.479 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 
-- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.741 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.741 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- 
# continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.742 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.742 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.743 21:56:29 -- setup/common.sh@33 -- # echo 0 00:04:47.743 21:56:29 -- setup/common.sh@33 -- # return 0 00:04:47.743 21:56:29 -- setup/hugepages.sh@99 -- # surp=0 00:04:47.743 21:56:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.743 21:56:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.743 21:56:29 -- setup/common.sh@18 -- # local node= 00:04:47.743 21:56:29 -- setup/common.sh@19 -- # local var val 00:04:47.743 21:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.743 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.743 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.743 21:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.743 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.743 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25826388 kB' 'MemAvailable: 31005532 kB' 'Buffers: 2696 kB' 'Cached: 13223480 kB' 'SwapCached: 0 kB' 'Active: 9176784 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606676 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584084 kB' 'Mapped: 218144 kB' 'Shmem: 8025972 kB' 'KReclaimable: 525280 kB' 'Slab: 901328 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376048 kB' 'KernelStack: 12880 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9798164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196732 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- 
setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 
00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 
21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.743 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.743 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 
21:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.744 21:56:29 -- setup/common.sh@33 -- # echo 0 00:04:47.744 21:56:29 -- setup/common.sh@33 -- # return 0 00:04:47.744 21:56:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:47.744 21:56:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.744 nr_hugepages=1024 00:04:47.744 21:56:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.744 resv_hugepages=0 00:04:47.744 21:56:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.744 surplus_hugepages=0 
00:04:47.744 21:56:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.744 anon_hugepages=0 00:04:47.744 21:56:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.744 21:56:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.744 21:56:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.744 21:56:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.744 21:56:29 -- setup/common.sh@18 -- # local node= 00:04:47.744 21:56:29 -- setup/common.sh@19 -- # local var val 00:04:47.744 21:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.744 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.744 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.744 21:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.744 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.744 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25826388 kB' 'MemAvailable: 31005532 kB' 'Buffers: 2696 kB' 'Cached: 13223496 kB' 'SwapCached: 0 kB' 'Active: 9176720 kB' 'Inactive: 4630096 kB' 'Active(anon): 8606612 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583944 kB' 'Mapped: 218144 kB' 'Shmem: 8025988 kB' 'KReclaimable: 525280 kB' 'Slab: 901328 kB' 'SReclaimable: 525280 kB' 'SUnreclaim: 376048 kB' 'KernelStack: 12880 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9806992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196732 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- 
setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.744 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.744 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- 
setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 
00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.745 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.745 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.745 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.745 21:56:29 -- setup/common.sh@33 -- # echo 1024 00:04:47.745 21:56:29 -- setup/common.sh@33 -- # return 0 00:04:47.745 21:56:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.745 21:56:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.745 21:56:29 -- setup/hugepages.sh@27 -- # local node 00:04:47.745 21:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.745 21:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.745 21:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.745 21:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.745 21:56:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.745 21:56:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.745 21:56:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.745 21:56:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.746 21:56:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.746 21:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.746 21:56:29 -- setup/common.sh@18 -- # local node=0 00:04:47.746 21:56:29 -- setup/common.sh@19 -- # local var val 00:04:47.746 21:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.746 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.746 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.746 21:56:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.746 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.746 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.746 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 16720668 kB' 'MemUsed: 7898744 kB' 'SwapCached: 0 kB' 'Active: 5829700 kB' 'Inactive: 331748 kB' 'Active(anon): 5418180 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732520 kB' 'Mapped: 156220 kB' 'AnonPages: 432308 kB' 'Shmem: 4989252 kB' 'KernelStack: 7288 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316524 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 179232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.746 21:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.746 21:56:29 -- setup/common.sh@32 -- # continue
00:04:47.746 21:56:29 -- setup/common.sh@31 -- # IFS=': '
00:04:47.746 21:56:29 -- setup/common.sh@31 -- # read -r var val _
00:04:47.747 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.747 21:56:29 -- setup/common.sh@33 -- # echo 0
00:04:47.747 21:56:29 -- setup/common.sh@33 -- # return 0
00:04:47.747 21:56:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.747 21:56:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.747 21:56:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.747 21:56:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:47.747 21:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.747 21:56:29 -- setup/common.sh@18 -- # local node=1
00:04:47.747 21:56:29 -- setup/common.sh@19 -- # local var val
00:04:47.747 21:56:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.747 21:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.747 21:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:47.747 21:56:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:47.747 21:56:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.747 21:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.747 21:56:29 -- setup/common.sh@31 -- # IFS=': '
00:04:47.747 21:56:29 -- setup/common.sh@31 -- # read -r var val _
00:04:47.747 21:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407232 kB' 'MemFree: 9107300 kB' 'MemUsed: 10299932 kB' 'SwapCached: 0 kB' 'Active: 3347868 kB' 'Inactive: 4298348 kB' 'Active(anon): 3189280 kB' 'Inactive(anon): 0 kB' 'Active(file): 158588 kB' 'Inactive(file): 4298348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7493688 kB' 'Mapped: 61924 kB' 'AnonPages: 152612 kB' 'Shmem: 3036752 kB' 'KernelStack: 5608 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 387988 kB' 'Slab: 584804 kB' 'SReclaimable: 387988 kB' 'SUnreclaim: 196816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:47.747 21:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.747 21:56:29 -- setup/common.sh@32 -- # continue
00:04:47.747 21:56:29 -- setup/common.sh@31 -- # IFS=': '
00:04:47.747 21:56:29 -- setup/common.sh@31 -- # read -r var val _
00:04:47.748 21:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.748 21:56:29 -- setup/common.sh@33 -- # echo 0
00:04:47.748 21:56:29 -- setup/common.sh@33 -- # return 0
00:04:47.748 21:56:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.748 21:56:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.748 21:56:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.748 21:56:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.748 21:56:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:47.748 node0=512 expecting 512
00:04:47.748 21:56:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.748 21:56:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.748 21:56:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.748 21:56:29 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:47.748 node1=512 expecting 512
00:04:47.748 21:56:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:47.748 
00:04:47.748 real	0m1.669s
00:04:47.748 user	0m0.697s
00:04:47.748 sys	0m0.945s
00:04:47.748 21:56:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:47.748 21:56:29 -- common/autotest_common.sh@10 -- # set +x
00:04:47.748 ************************************
00:04:47.748 END TEST even_2G_alloc
00:04:47.748 ************************************
00:04:47.748 21:56:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:47.748 21:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:47.748 21:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:48.006 21:56:30 -- common/autotest_common.sh@10 -- # set +x
00:04:48.006 ************************************
00:04:48.006 START TEST odd_alloc
00:04:48.006 ************************************
00:04:48.006 21:56:30 -- common/autotest_common.sh@1111 -- # odd_alloc
00:04:48.006 21:56:30 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:48.006 21:56:30 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:48.006 21:56:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:48.006 21:56:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:48.006 21:56:30 -- setup/hugepages.sh@62 -- # user_nodes=()
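The get_meminfo scans in the trace above all follow one pattern: pick /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strip the `Node N ` prefix, then read field by field until the requested key matches. A minimal self-contained sketch of that pattern (a temporary file with made-up values stands in for the real per-node meminfo; not the actual setup/common.sh code):

```shell
#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern below

# Hypothetical stand-in for /sys/devices/system/node/node1/meminfo
mem_f=$(mktemp)
printf '%s\n' \
    'Node 1 MemTotal: 19407232 kB' \
    'Node 1 HugePages_Total: 512' \
    'Node 1 HugePages_Free: 512' \
    'Node 1 HugePages_Surp: 0' > "$mem_f"

get=HugePages_Surp
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, as in the trace
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue
    echo "$val"                      # the matched value
    break
done
rm -f "$mem_f"
```

In the real script the matched value is emitted with `echo`/`return 0` from inside get_meminfo, which is what the `@33 -- # echo 0` / `@33 -- # return 0` entries in the trace correspond to.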
00:04:48.006 21:56:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:48.006 21:56:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:48.006 21:56:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:48.006 21:56:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:48.006 21:56:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:48.006 21:56:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:48.006 21:56:30 -- setup/hugepages.sh@83 -- # : 513
00:04:48.006 21:56:30 -- setup/hugepages.sh@84 -- # : 1
00:04:48.006 21:56:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:48.006 21:56:30 -- setup/hugepages.sh@83 -- # : 0
00:04:48.006 21:56:30 -- setup/hugepages.sh@84 -- # : 0
00:04:48.006 21:56:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.006 21:56:30 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:48.006 21:56:30 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:48.006 21:56:30 -- setup/hugepages.sh@160 -- # setup output
00:04:48.006 21:56:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:48.006 21:56:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:48.941 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:48.941 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:48.941 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:48.941 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:48.941 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:48.941 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:48.941 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
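The `nodes_test[_no_nodes - 1]=512` then `=513` assignments in the odd_alloc setup above come from splitting nr_hugepages=1025 across two nodes, highest node first, so the odd remainder lands on node 0. A simplified sketch of that split (variable names kept from the trace; the exact arithmetic is inferred from the logged values, not copied from setup/hugepages.sh):

```shell
#!/usr/bin/env bash
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test

# Divide the remaining pages evenly over the nodes not yet assigned;
# integer division pushes the odd page onto the last node processed (node 0).
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
    : $(( _no_nodes-- ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512
```

This reproduces the 512/513 pair seen in the `@82` entries: node 1 gets 1025/2 = 512, and node 0 gets the remaining 513.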
00:04:48.941 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:48.941 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:48.941 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:49.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:49.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:49.202 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:49.202 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:49.202 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:49.202 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:49.202 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:49.202 21:56:31 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:49.202 21:56:31 -- setup/hugepages.sh@89 -- # local node
00:04:49.202 21:56:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.202 21:56:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.202 21:56:31 -- setup/hugepages.sh@92 -- # local surp
00:04:49.202 21:56:31 -- setup/hugepages.sh@93 -- # local resv
00:04:49.202 21:56:31 -- setup/hugepages.sh@94 -- # local anon
00:04:49.202 21:56:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.202 21:56:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.202 21:56:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.202 21:56:31 -- setup/common.sh@18 -- # local node=
00:04:49.202 21:56:31 -- setup/common.sh@19 -- # local var val
00:04:49.202 21:56:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:49.202 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.202 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.202 21:56:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.202 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.202 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.202 21:56:31 -- setup/common.sh@31 -- # IFS=': '
00:04:49.202 21:56:31 -- setup/common.sh@31 -- # read -r var val _
00:04:49.202 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25811004 kB' 'MemAvailable: 30990164 kB' 'Buffers: 2696 kB' 'Cached: 13223564 kB' 'SwapCached: 0 kB' 'Active: 9169844 kB' 'Inactive: 4630096 kB' 'Active(anon): 8599736 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576872 kB' 'Mapped: 217208 kB' 'Shmem: 8026056 kB' 'KReclaimable: 525296 kB' 'Slab: 900932 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375636 kB' 'KernelStack: 12784 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352324 kB' 'Committed_AS: 9767404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196700 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB'
00:04:49.202 21:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.202 21:56:31 -- setup/common.sh@32 -- # continue
00:04:49.202 21:56:31 -- setup/common.sh@31 -- # IFS=': '
00:04:49.202 21:56:31 -- setup/common.sh@31 -- # read -r var val _
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.203 21:56:31 -- setup/common.sh@33 -- # echo 0
00:04:49.203 21:56:31 -- setup/common.sh@33 -- # return 0
00:04:49.203 21:56:31 -- setup/hugepages.sh@97 -- # anon=0
00:04:49.203 21:56:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.203 21:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.203 21:56:31 -- setup/common.sh@18 -- # local node=
00:04:49.203 21:56:31 -- setup/common.sh@19 -- # local var val
00:04:49.203 21:56:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:49.203 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.203 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.203 21:56:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.203 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.203 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': '
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _
00:04:49.203 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25811876 kB' 'MemAvailable: 30991036 kB' 'Buffers: 2696 kB' 'Cached: 13223568 kB' 'SwapCached: 0 kB' 'Active: 9170108 kB' 'Inactive: 4630096 kB' 'Active(anon): 8600000 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577180 kB' 'Mapped: 217168 kB' 'Shmem: 8026060 kB' 'KReclaimable: 525296 kB' 'Slab: 900952 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375656 kB' 'KernelStack: 12768 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352324 kB' 'Committed_AS: 9767416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196652 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB'
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': '
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': '
00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.203 21:56:31 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.203 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.203 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # 
continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 
21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.204 21:56:31 -- setup/common.sh@33 -- # echo 0 00:04:49.204 21:56:31 -- setup/common.sh@33 -- # return 0 00:04:49.204 21:56:31 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.204 21:56:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.204 21:56:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.204 21:56:31 -- setup/common.sh@18 -- # local 
node= 00:04:49.204 21:56:31 -- setup/common.sh@19 -- # local var val 00:04:49.204 21:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.204 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.204 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.204 21:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.204 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.204 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25812368 kB' 'MemAvailable: 30991528 kB' 'Buffers: 2696 kB' 'Cached: 13223576 kB' 'SwapCached: 0 kB' 'Active: 9169620 kB' 'Inactive: 4630096 kB' 'Active(anon): 8599512 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576692 kB' 'Mapped: 217056 kB' 'Shmem: 8026068 kB' 'KReclaimable: 525296 kB' 'Slab: 900928 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375632 kB' 'KernelStack: 12800 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352324 kB' 'Committed_AS: 9767432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196652 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:49.204 21:56:31 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.204 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.204 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 
21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # 
continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 
21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.205 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.205 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.206 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.206 21:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.206 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.206 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.206 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.206 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.466 21:56:31 -- setup/common.sh@33 -- # echo 0 00:04:49.466 21:56:31 -- setup/common.sh@33 -- # return 0 00:04:49.466 21:56:31 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.466 21:56:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:49.466 nr_hugepages=1025 00:04:49.466 21:56:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.466 resv_hugepages=0 00:04:49.466 21:56:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.466 surplus_hugepages=0 00:04:49.466 21:56:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.466 anon_hugepages=0 00:04:49.466 21:56:31 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:49.466 21:56:31 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:49.466 21:56:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.466 21:56:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.466 21:56:31 -- setup/common.sh@18 -- # local node= 00:04:49.466 21:56:31 -- setup/common.sh@19 -- # local var val 00:04:49.466 21:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.466 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.466 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.466 21:56:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.466 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.466 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25814276 kB' 'MemAvailable: 30993436 kB' 'Buffers: 2696 kB' 'Cached: 13223592 kB' 'SwapCached: 0 kB' 
'Active: 9169624 kB' 'Inactive: 4630096 kB' 'Active(anon): 8599516 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576692 kB' 'Mapped: 217056 kB' 'Shmem: 8026084 kB' 'KReclaimable: 525296 kB' 'Slab: 900928 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375632 kB' 'KernelStack: 12800 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352324 kB' 'Committed_AS: 9767444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196652 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 
21:56:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.466 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.466 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.467 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.467 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.467 21:56:31 -- setup/common.sh@33 -- # echo 1025 00:04:49.467 21:56:31 -- setup/common.sh@33 -- # return 0 00:04:49.467 21:56:31 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:49.467 21:56:31 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.467 21:56:31 -- setup/hugepages.sh@27 -- # local node 00:04:49.467 21:56:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.467 21:56:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.468 21:56:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.468 21:56:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:49.468 21:56:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.468 21:56:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.468 21:56:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.468 21:56:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.468 21:56:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.468 21:56:31 
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.468 21:56:31 -- setup/common.sh@18 -- # local node=0 00:04:49.468 21:56:31 -- setup/common.sh@19 -- # local var val 00:04:49.468 21:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.468 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.468 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.468 21:56:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.468 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.468 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 16705416 kB' 'MemUsed: 7913996 kB' 'SwapCached: 0 kB' 'Active: 5825432 kB' 'Inactive: 331748 kB' 'Active(anon): 5413912 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732600 kB' 'Mapped: 155096 kB' 'AnonPages: 427744 kB' 'Shmem: 4989332 kB' 'KernelStack: 7256 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316276 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 178984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- 
setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 
00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.468 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.468 21:56:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # continue 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.469 21:56:31 -- setup/common.sh@33 -- # echo 0 00:04:49.469 21:56:31 -- 
setup/common.sh@33 -- # return 0 00:04:49.469 21:56:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.469 21:56:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.469 21:56:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.469 21:56:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:49.469 21:56:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.469 21:56:31 -- setup/common.sh@18 -- # local node=1 00:04:49.469 21:56:31 -- setup/common.sh@19 -- # local var val 00:04:49.469 21:56:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.469 21:56:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.469 21:56:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:49.469 21:56:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:49.469 21:56:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.469 21:56:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.469 21:56:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.469 21:56:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407232 kB' 'MemFree: 9110024 kB' 'MemUsed: 10297208 kB' 'SwapCached: 0 kB' 'Active: 3344228 kB' 'Inactive: 4298348 kB' 'Active(anon): 3185640 kB' 'Inactive(anon): 0 kB' 'Active(file): 158588 kB' 'Inactive(file): 4298348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7493704 kB' 'Mapped: 61960 kB' 'AnonPages: 148948 kB' 'Shmem: 3036768 kB' 'KernelStack: 5544 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 388004 kB' 'Slab: 584652 kB' 'SReclaimable: 388004 kB' 'SUnreclaim: 196648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 
'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.469 21:56:31 -- setup/common.sh@32 -- # continue [… repeated setup/common.sh@31-@32 xtrace records elided: each remaining /proc/meminfo field (MemFree … HugePages_Free) is compared against HugePages_Surp and skipped with continue …] 00:04:49.470 21:56:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.470 21:56:31 -- setup/common.sh@33 -- # echo 0 00:04:49.470 21:56:31 -- setup/common.sh@33 -- # return 0 00:04:49.470 21:56:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.470 21:56:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.470 21:56:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:49.470 node0=512 expecting 513 00:04:49.470 21:56:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.470 21:56:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.470 21:56:31 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:49.470 node1=513 expecting 512 00:04:49.470 21:56:31 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:49.470 00:04:49.470 real 0m1.521s 00:04:49.470 user 0m0.645s 00:04:49.470 sys 0m0.849s 00:04:49.470 21:56:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.470 21:56:31 -- common/autotest_common.sh@10 -- # set +x 00:04:49.470 ************************************ 00:04:49.470 END TEST odd_alloc 00:04:49.470 ************************************ 00:04:49.470 21:56:31 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:49.470 21:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.470 21:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.470 21:56:31 -- common/autotest_common.sh@10 -- # set +x 00:04:49.470
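[editor's note] The odd_alloc result above ("node0=512 expecting 513", "node1=513 expecting 512") still passes because hugepages.sh@126-130 compares the *sorted* per-node counts, so the split may land on either node. A minimal sketch of the even-split-with-remainder scheme being verified (split_hugepages is an illustrative name, not the actual setup/hugepages.sh helper):

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- split_hugepages is NOT a real
# setup/hugepages.sh function. It shows the split the test expects:
# an odd total over 2 NUMA nodes, remainder going to one node.
split_hugepages() {
  local total=$1 nodes=$2 i base rem
  base=$(( total / nodes ))
  rem=$(( total % nodes ))
  for (( i = 0; i < nodes; i++ )); do
    # lower-numbered nodes absorb one extra page each
    # until the remainder is used up
    echo "node${i}=$(( base + (i < rem ? 1 : 0) ))"
  done
}
split_hugepages 1025 2   # -> node0=513, node1=512
```

The actual test then sorts both the expected and observed counts before comparing, which is why a 512/513 vs 513/512 mismatch across nodes is not a failure.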
************************************ 00:04:49.470 START TEST custom_alloc 00:04:49.470 ************************************ 00:04:49.470 21:56:31 -- common/autotest_common.sh@1111 -- # custom_alloc 00:04:49.470 21:56:31 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:49.470 21:56:31 -- setup/hugepages.sh@169 -- # local node 00:04:49.470 21:56:31 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:49.470 21:56:31 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:49.470 21:56:31 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:49.470 21:56:31 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:49.470 21:56:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:49.470 21:56:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.470 21:56:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:49.470 21:56:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.470 21:56:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:49.470 21:56:31 -- setup/hugepages.sh@83 -- # : 256 00:04:49.470 21:56:31 -- setup/hugepages.sh@84 -- # : 1 00:04:49.470 21:56:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:49.470 21:56:31 -- 
setup/hugepages.sh@83 -- # : 0 00:04:49.470 21:56:31 -- setup/hugepages.sh@84 -- # : 0 00:04:49.470 21:56:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:49.470 21:56:31 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:49.470 21:56:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.470 21:56:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.470 21:56:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.470 21:56:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.470 21:56:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.470 21:56:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:49.470 21:56:31 -- setup/hugepages.sh@78 -- # return 0 00:04:49.470 21:56:31 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:49.470 21:56:31 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:49.470 21:56:31 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:49.470 21:56:31 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.470 21:56:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.470 21:56:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.470 21:56:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.470 21:56:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:49.470 21:56:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:49.470 21:56:31 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:49.470 21:56:31 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:49.470 21:56:31 -- setup/hugepages.sh@78 -- # return 0 00:04:49.470 21:56:31 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:49.470 21:56:31 -- setup/hugepages.sh@187 -- # setup output 00:04:49.470 21:56:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.470 21:56:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.844 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.844 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.844 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.844 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.844 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.844 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.844 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:04:50.844 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.844 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.844 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.844 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.844 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.844 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.844 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.844 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.844 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.844 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:51.106 21:56:33 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:51.106 21:56:33 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:51.106 21:56:33 -- setup/hugepages.sh@89 -- # local node 00:04:51.106 21:56:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.106 21:56:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.106 21:56:33 -- setup/hugepages.sh@92 -- # local surp 00:04:51.106 21:56:33 -- setup/hugepages.sh@93 -- # local resv 00:04:51.106 21:56:33 -- setup/hugepages.sh@94 -- # local anon 00:04:51.106 21:56:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.106 21:56:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.106 21:56:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.107 21:56:33 -- setup/common.sh@18 -- # local node= 00:04:51.107 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.107 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.107 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.107 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.107 21:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.107 21:56:33 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:51.107 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.107 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.107 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.107 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 24761900 kB' 'MemAvailable: 29941060 kB' 'Buffers: 2696 kB' 'Cached: 13231852 kB' 'SwapCached: 0 kB' 'Active: 9182104 kB' 'Inactive: 4630096 kB' 'Active(anon): 8611996 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580756 kB' 'Mapped: 217652 kB' 'Shmem: 8034344 kB' 'KReclaimable: 525296 kB' 'Slab: 900868 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375572 kB' 'KernelStack: 12944 kB' 'PageTables: 9492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829060 kB' 'Committed_AS: 9779980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196892 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:51.107 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.107 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.107 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.107 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.107 21:56:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.107 21:56:33 -- setup/common.sh@32 -- # 
continue [… repeated setup/common.sh@31-@32 xtrace records elided: each /proc/meminfo field is compared against AnonHugePages and skipped with continue until the match …] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.108 21:56:33 -- setup/common.sh@33 -- # echo 0 00:04:51.108 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.108 21:56:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.108 21:56:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.108 21:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.108 21:56:33 -- setup/common.sh@18 -- # local node= 00:04:51.108 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.108 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.108 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.108 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.108 21:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.108 21:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.108 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 24760940 kB' 'MemAvailable: 29940100 kB'
'Buffers: 2696 kB' 'Cached: 13231856 kB' 'SwapCached: 0 kB' 'Active: 9184300 kB' 'Inactive: 4630096 kB' 'Active(anon): 8614192 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582980 kB' 'Mapped: 217668 kB' 'Shmem: 8034348 kB' 'KReclaimable: 525296 kB' 'Slab: 901036 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375740 kB' 'KernelStack: 13072 kB' 'PageTables: 9896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829060 kB' 'Committed_AS: 9782512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196912 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.108 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.108 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 
-- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- 
setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.109 21:56:33 -- setup/common.sh@33 -- # echo 0 00:04:51.109 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.109 21:56:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.109 21:56:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.109 21:56:33 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:51.109 21:56:33 -- setup/common.sh@18 -- # local node= 00:04:51.109 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.109 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.109 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.109 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.109 21:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.109 21:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.109 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 24759532 kB' 'MemAvailable: 29938692 kB' 'Buffers: 2696 kB' 'Cached: 13231856 kB' 'SwapCached: 0 kB' 'Active: 9184140 kB' 'Inactive: 4630096 kB' 'Active(anon): 8614032 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582824 kB' 'Mapped: 218016 kB' 'Shmem: 8034348 kB' 'KReclaimable: 525296 kB' 'Slab: 901012 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375716 kB' 'KernelStack: 12864 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829060 kB' 'Committed_AS: 9790572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196912 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 
20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.109 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.109 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- 
setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- 
setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@32 -- 
# continue 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.110 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.110 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.110 21:56:33 -- setup/common.sh@33 -- # echo 0 00:04:51.111 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.111 21:56:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.111 21:56:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:51.111 nr_hugepages=1536 00:04:51.111 21:56:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.111 resv_hugepages=0 00:04:51.111 21:56:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.111 surplus_hugepages=0 00:04:51.111 21:56:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.111 anon_hugepages=0 00:04:51.111 21:56:33 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:51.111 21:56:33 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:51.111 21:56:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.111 21:56:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.111 21:56:33 -- setup/common.sh@18 -- # local node= 00:04:51.111 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.111 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.111 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.111 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.111 21:56:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.111 21:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.111 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 24760108 kB' 'MemAvailable: 29939268 kB' 
'Buffers: 2696 kB' 'Cached: 13231868 kB' 'SwapCached: 0 kB' 'Active: 9180524 kB' 'Inactive: 4630096 kB' 'Active(anon): 8610416 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579404 kB' 'Mapped: 217568 kB' 'Shmem: 8034360 kB' 'KReclaimable: 525296 kB' 'Slab: 901012 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375716 kB' 'KernelStack: 12816 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829060 kB' 'Committed_AS: 9778756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196588 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.111 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.111 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.112 21:56:33 -- setup/common.sh@33 -- # echo 1536 00:04:51.112 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.112 21:56:33 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:51.112 21:56:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.112 21:56:33 -- setup/hugepages.sh@27 -- # local node 00:04:51.112 21:56:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.112 21:56:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.112 21:56:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.112 21:56:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.112 21:56:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.112 21:56:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.112 21:56:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.112 21:56:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.112 21:56:33 -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:51.112 21:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.112 21:56:33 -- setup/common.sh@18 -- # local node=0 00:04:51.112 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.112 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.112 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.112 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.112 21:56:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.112 21:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.112 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 16702352 kB' 'MemUsed: 7917060 kB' 'SwapCached: 0 kB' 'Active: 5831672 kB' 'Inactive: 331748 kB' 'Active(anon): 5420152 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732632 kB' 'Mapped: 155104 kB' 'AnonPages: 433948 kB' 'Shmem: 4989364 kB' 'KernelStack: 7256 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316172 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 178880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.112 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.112 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- 
setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- 
setup/common.sh@33 -- # echo 0 00:04:51.113 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.113 21:56:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.113 21:56:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.113 21:56:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.113 21:56:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.113 21:56:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.113 21:56:33 -- setup/common.sh@18 -- # local node=1 00:04:51.113 21:56:33 -- setup/common.sh@19 -- # local var val 00:04:51.113 21:56:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.113 21:56:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.113 21:56:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.113 21:56:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.113 21:56:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.113 21:56:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407232 kB' 'MemFree: 8057756 kB' 'MemUsed: 11349476 kB' 'SwapCached: 0 kB' 'Active: 3351628 kB' 'Inactive: 4298348 kB' 'Active(anon): 3193040 kB' 'Inactive(anon): 0 kB' 'Active(file): 158588 kB' 'Inactive(file): 4298348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7501976 kB' 'Mapped: 62016 kB' 'AnonPages: 148168 kB' 'Shmem: 3045040 kB' 'KernelStack: 5528 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 388004 kB' 'Slab: 584840 kB' 'SReclaimable: 388004 kB' 'SUnreclaim: 196836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.113 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.113 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # continue 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.114 21:56:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.114 21:56:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.114 21:56:33 -- setup/common.sh@33 -- # echo 0 00:04:51.114 21:56:33 -- setup/common.sh@33 -- # return 0 00:04:51.114 21:56:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.114 21:56:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.114 21:56:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.114 21:56:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.114 21:56:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.114 node0=512 expecting 512 00:04:51.114 21:56:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.114 21:56:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.114 21:56:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.114 21:56:33 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:51.114 node1=1024 expecting 1024 00:04:51.114 21:56:33 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:51.114 00:04:51.114 real 0m1.634s 00:04:51.114 user 0m0.653s 00:04:51.114 sys 0m0.957s 00:04:51.114 21:56:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.114 21:56:33 -- common/autotest_common.sh@10 -- # set +x 00:04:51.114 ************************************ 00:04:51.114 END TEST custom_alloc 00:04:51.114 ************************************ 00:04:51.114 21:56:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:51.114 21:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.114 21:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.114 21:56:33 
-- common/autotest_common.sh@10 -- # set +x 00:04:51.373 ************************************ 00:04:51.373 START TEST no_shrink_alloc 00:04:51.373 ************************************ 00:04:51.373 21:56:33 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:51.373 21:56:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:51.373 21:56:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.373 21:56:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.373 21:56:33 -- setup/hugepages.sh@51 -- # shift 00:04:51.373 21:56:33 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.373 21:56:33 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.373 21:56:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.373 21:56:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.373 21:56:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.373 21:56:33 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.373 21:56:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.373 21:56:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.373 21:56:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.373 21:56:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.373 21:56:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.373 21:56:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.373 21:56:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.373 21:56:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.373 21:56:33 -- setup/hugepages.sh@73 -- # return 0 00:04:51.373 21:56:33 -- setup/hugepages.sh@198 -- # setup output 00:04:51.373 21:56:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.373 21:56:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.749 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.749 0000:82:00.0 (8086 0a54): Already 
using the vfio-pci driver 00:04:52.749 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.749 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.749 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.749 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.749 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.749 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.749 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.749 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.749 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.749 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.749 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.749 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.749 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.749 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.749 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.749 21:56:34 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:52.749 21:56:34 -- setup/hugepages.sh@89 -- # local node 00:04:52.749 21:56:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.749 21:56:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.749 21:56:34 -- setup/hugepages.sh@92 -- # local surp 00:04:52.749 21:56:34 -- setup/hugepages.sh@93 -- # local resv 00:04:52.749 21:56:34 -- setup/hugepages.sh@94 -- # local anon 00:04:52.749 21:56:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.749 21:56:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.749 21:56:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.749 21:56:34 -- setup/common.sh@18 -- # local node= 00:04:52.749 21:56:34 -- setup/common.sh@19 -- # local var val 00:04:52.749 21:56:34 -- setup/common.sh@20 
-- # local mem_f mem 00:04:52.749 21:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.749 21:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.749 21:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.749 21:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.749 21:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25788628 kB' 'MemAvailable: 30967788 kB' 'Buffers: 2696 kB' 'Cached: 13231956 kB' 'SwapCached: 0 kB' 'Active: 9178080 kB' 'Inactive: 4630096 kB' 'Active(anon): 8607972 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576876 kB' 'Mapped: 217120 kB' 'Shmem: 8034448 kB' 'KReclaimable: 525296 kB' 'Slab: 900680 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375384 kB' 'KernelStack: 12832 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9775888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196652 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # 
continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.749 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.749 21:56:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.749 21:56:34 -- setup/common.sh@33 -- # echo 0 00:04:52.749 21:56:34 -- setup/common.sh@33 -- # return 0 00:04:52.749 21:56:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.749 21:56:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.749 21:56:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.749 21:56:34 -- setup/common.sh@18 -- # local node= 00:04:52.749 21:56:34 -- setup/common.sh@19 -- # local var val 00:04:52.749 21:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.749 21:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.750 21:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.750 21:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.750 21:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.750 21:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25788928 kB' 'MemAvailable: 30968088 kB' 'Buffers: 2696 kB' 'Cached: 13231960 kB' 'SwapCached: 0 kB' 'Active: 9178172 kB' 'Inactive: 4630096 kB' 'Active(anon): 8608064 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576952 kB' 'Mapped: 217180 kB' 'Shmem: 8034452 kB' 'KReclaimable: 525296 kB' 'Slab: 900680 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375384 kB' 'KernelStack: 12784 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9775900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196636 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.750 21:56:34 -- setup/common.sh@33
-- # echo 0 00:04:52.750 21:56:34 -- setup/common.sh@33 -- # return 0 00:04:52.750 21:56:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.750 21:56:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.750 21:56:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.750 21:56:34 -- setup/common.sh@18 -- # local node= 00:04:52.750 21:56:34 -- setup/common.sh@19 -- # local var val 00:04:52.750 21:56:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.750 21:56:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.750 21:56:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.750 21:56:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.750 21:56:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.750 21:56:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.750 21:56:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25788928 kB' 'MemAvailable: 30968088 kB' 'Buffers: 2696 kB' 'Cached: 13231972 kB' 'SwapCached: 0 kB' 'Active: 9178184 kB' 'Inactive: 4630096 kB' 'Active(anon): 8608076 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576900 kB' 'Mapped: 217116 kB' 'Shmem: 8034464 kB' 'KReclaimable: 525296 kB' 'Slab: 900672 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375376 kB' 'KernelStack: 12816 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9775916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196652 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.750 21:56:34 -- setup/common.sh@32 -- # continue 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.750 21:56:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.012 21:56:35 -- setup/common.sh@32 -- 
# continue 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.012 21:56:35 -- setup/common.sh@33 -- # echo 0 00:04:53.012 21:56:35 -- setup/common.sh@33 -- # return 0 00:04:53.012 21:56:35 -- setup/hugepages.sh@100 -- # resv=0 00:04:53.012 21:56:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.012 nr_hugepages=1024 00:04:53.012 21:56:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.012 resv_hugepages=0 00:04:53.012 21:56:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.012 surplus_hugepages=0 00:04:53.012 21:56:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.012 anon_hugepages=0 00:04:53.012 21:56:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.012 21:56:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.012 21:56:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.012 21:56:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.012 21:56:35 -- setup/common.sh@18 -- # local node= 00:04:53.012 21:56:35 -- setup/common.sh@19 -- # local var val 00:04:53.012 21:56:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.012 21:56:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.012 21:56:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.012 21:56:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.012 21:56:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.012 21:56:35 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.012 21:56:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25789352 kB' 'MemAvailable: 30968512 kB' 'Buffers: 2696 kB' 'Cached: 13231984 kB' 'SwapCached: 0 kB' 'Active: 9178216 kB' 'Inactive: 4630096 kB' 'Active(anon): 8608108 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576888 kB' 'Mapped: 217116 kB' 'Shmem: 8034476 kB' 'KReclaimable: 525296 kB' 'Slab: 900672 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375376 kB' 'KernelStack: 12816 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9775928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196668 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.012 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.012 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.012 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.014 21:56:35 -- setup/common.sh@33 -- # echo 1024 00:04:53.014 21:56:35 -- setup/common.sh@33 -- # return 0 00:04:53.014 21:56:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.014 21:56:35 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.014 21:56:35 -- setup/hugepages.sh@27 -- # local node 00:04:53.014 21:56:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.014 21:56:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:53.014 21:56:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.014 21:56:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:53.014 21:56:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.014 21:56:35 
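The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue` lines above is bash xtrace output from setup/common.sh scanning /proc/meminfo one field at a time until the requested key matches, then echoing its value. A minimal sketch of that pattern follows; the function name `meminfo_get` and the temp-file demo are illustrative stand-ins, not the real `get_meminfo` helper:

```shell
#!/usr/bin/env bash
# Sketch of the lookup loop traced above: split each meminfo line on ': ',
# skip non-matching keys with `continue`, echo the value on the first match.
meminfo_get() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Total
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a fake meminfo file so the sketch is self-contained.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 24619412 kB' 'HugePages_Total: 1024' \
              'HugePages_Surp: 0' > "$sample"
total=$(meminfo_get HugePages_Total "$sample")
surp=$(meminfo_get HugePages_Surp "$sample")
rm -f "$sample"
```

The `continue` on every non-matching key is what produces one xtrace line per meminfo field in the log.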
-- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.014 21:56:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.014 21:56:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.014 21:56:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.014 21:56:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.014 21:56:35 -- setup/common.sh@18 -- # local node=0 00:04:53.014 21:56:35 -- setup/common.sh@19 -- # local var val 00:04:53.014 21:56:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.014 21:56:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.014 21:56:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.014 21:56:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.014 21:56:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.014 21:56:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 15640580 kB' 'MemUsed: 8978832 kB' 'SwapCached: 0 kB' 'Active: 5826728 kB' 'Inactive: 331748 kB' 'Active(anon): 5415208 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732644 kB' 'Mapped: 155104 kB' 'AnonPages: 429004 kB' 'Shmem: 4989376 kB' 'KernelStack: 7256 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316072 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 178780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- 
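At this point the trace switches `mem_f` from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a `Node <n> ` prefix that the script strips with the extglob expansion `${mem[@]#Node +([0-9]) }` before running the same key/value scan. A hedged sketch of that per-node variant (function name and demo file are hypothetical):

```shell
#!/usr/bin/env bash
# Sketch: per-node meminfo lookup. Node files prefix every line with
# "Node <n> ", so strip that (extglob pattern) before matching the key.
shopt -s extglob

node_meminfo_get() {
    local get=$1 mem_f=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }          # "Node 0 MemTotal: …" -> "MemTotal: …"
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Demo with a fake node0 meminfo file.
sample=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 24619412 kB' \
              'Node 0 HugePages_Surp: 0' > "$sample"
node0_surp=$(node_meminfo_get HugePages_Surp "$sample")
rm -f "$sample"
```

Note that `shopt -s extglob` is required for `+([0-9])` to work inside the parameter expansion, which is why the real script enables it.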
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.014 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.014 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- 
setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 
00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # continue 
00:04:53.015 21:56:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.015 21:56:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.015 21:56:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.015 21:56:35 -- setup/common.sh@33 -- # echo 0 00:04:53.015 21:56:35 -- setup/common.sh@33 -- # return 0 00:04:53.015 21:56:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.015 21:56:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.015 21:56:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.015 21:56:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.015 21:56:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:53.015 node0=1024 expecting 1024 00:04:53.015 21:56:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.015 21:56:35 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:53.016 21:56:35 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:53.016 21:56:35 -- setup/hugepages.sh@202 -- # setup output 00:04:53.016 21:56:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.016 21:56:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.391 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.391 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.391 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.391 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.391 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.391 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.391 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.391 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.391 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.391 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.391 
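The `get_nodes` bookkeeping traced above derives each NUMA node index from its sysfs path with `${node##*node}` and then prints the per-node verdict (`node0=1024 expecting 1024`). A minimal sketch under the same conventions; the hard-coded two-node layout and the `nodes_test` values are illustrative, not read from a live system:

```shell
#!/usr/bin/env bash
# Sketch: extract node indices the way the trace does and report the
# per-node hugepage count against the requested total.
declare -A nodes_test

# Stand-ins for the /sys/devices/system/node/node+([0-9]) glob on a 2-node box.
for node in /sys/devices/system/node/node0 /sys/devices/system/node/node1; do
    nodes_test[${node##*node}]=0     # "…/node0" -> index "0"
done
nodes_test[0]=1024                   # this run placed all 1024 pages on node0

check="node0=${nodes_test[0]} expecting 1024"
echo "$check"
```

This matches the log line `node0=1024 expecting 1024`: all pages landed on node0 and node1 stayed at zero.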
0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.391 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.391 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.391 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.391 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.391 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.391 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.391 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:54.391 21:56:36 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:54.391 21:56:36 -- setup/hugepages.sh@89 -- # local node 00:04:54.391 21:56:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.391 21:56:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.391 21:56:36 -- setup/hugepages.sh@92 -- # local surp 00:04:54.391 21:56:36 -- setup/hugepages.sh@93 -- # local resv 00:04:54.391 21:56:36 -- setup/hugepages.sh@94 -- # local anon 00:04:54.391 21:56:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.391 21:56:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.391 21:56:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.391 21:56:36 -- setup/common.sh@18 -- # local node= 00:04:54.391 21:56:36 -- setup/common.sh@19 -- # local var val 00:04:54.391 21:56:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.391 21:56:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.391 21:56:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.391 21:56:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.391 21:56:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.391 21:56:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.391 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.391 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.391 21:56:36 -- 
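The `verify_nr_hugepages` pass that starts here first evaluates `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`: transparent hugepages can only contribute to AnonHugePages when the active policy in /sys/kernel/mm/transparent_hugepage/enabled (the bracketed entry) is not `[never]`, so the AnonHugePages lookup is gated on that string. A hedged sketch of the gate; the function name is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of the transparent-hugepage gate traced above: the bracketed word
# in the "enabled" string is the active policy; skip the AnonHugePages
# accounting only when that policy is [never].
thp_anon_possible() {
    local enabled=$1                 # e.g. "always [madvise] never"
    [[ $enabled != *"[never]"* ]]
}

thp_anon_possible "always [madvise] never" && gate1=open || gate1=closed
thp_anon_possible "always madvise [never]" && gate2=open || gate2=closed
```

In this log the policy is `[madvise]`, so the gate is open and the script proceeds to read AnonHugePages (which comes back 0, hence `anon=0` a few lines later).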
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25816408 kB' 'MemAvailable: 30995568 kB' 'Buffers: 2696 kB' 'Cached: 13232032 kB' 'SwapCached: 0 kB' 'Active: 9180172 kB' 'Inactive: 4630096 kB' 'Active(anon): 8610064 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578688 kB' 'Mapped: 217332 kB' 'Shmem: 8034524 kB' 'KReclaimable: 525296 kB' 'Slab: 900780 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375484 kB' 'KernelStack: 13312 kB' 'PageTables: 10196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9778684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196876 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:54.391 21:56:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.391 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.391 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- 
setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 
21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.392 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.392 21:56:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.392 21:56:36 -- setup/common.sh@33 -- # echo 0 00:04:54.392 21:56:36 -- setup/common.sh@33 -- # return 0 00:04:54.392 21:56:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.392 21:56:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.392 21:56:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.393 21:56:36 -- setup/common.sh@18 -- # local node= 00:04:54.393 21:56:36 -- setup/common.sh@19 -- # local var val 00:04:54.393 21:56:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.393 21:56:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.393 21:56:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.393 21:56:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.393 21:56:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.393 21:56:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25818460 kB' 'MemAvailable: 30997620 kB' 'Buffers: 2696 kB' 'Cached: 13232032 kB' 'SwapCached: 0 kB' 'Active: 9180512 kB' 'Inactive: 4630096 kB' 'Active(anon): 8610404 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579092 kB' 'Mapped: 217128 kB' 'Shmem: 8034524 kB' 'KReclaimable: 525296 kB' 'Slab: 900764 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375468 kB' 'KernelStack: 13040 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9787036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196796 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.393 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.393 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.394 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.394 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.655 21:56:36 -- setup/common.sh@33 -- # echo 0 00:04:54.655 21:56:36 -- setup/common.sh@33 -- # return 0 00:04:54.655 21:56:36 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.655 21:56:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.655 21:56:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.655 21:56:36 -- setup/common.sh@18 -- # local node= 00:04:54.655 21:56:36 -- setup/common.sh@19 -- # local var val 00:04:54.655 21:56:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.655 21:56:36 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.655 21:56:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.655 21:56:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.655 21:56:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.655 21:56:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25817300 kB' 'MemAvailable: 30996460 kB' 'Buffers: 2696 kB' 'Cached: 13232036 kB' 'SwapCached: 0 kB' 'Active: 9180620 kB' 'Inactive: 4630096 kB' 'Active(anon): 8610512 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579232 kB' 'Mapped: 217564 kB' 'Shmem: 8034528 kB' 'KReclaimable: 525296 kB' 'Slab: 900828 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375532 kB' 'KernelStack: 13104 kB' 'PageTables: 9992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9777672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196828 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.655 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.655 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 
21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.656 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.656 21:56:36 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.657 21:56:36 -- setup/common.sh@33 -- # echo 0 00:04:54.657 21:56:36 -- setup/common.sh@33 -- # return 0 00:04:54.657 21:56:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.657 21:56:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.657 nr_hugepages=1024 00:04:54.657 21:56:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.657 resv_hugepages=0 00:04:54.657 21:56:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.657 surplus_hugepages=0 00:04:54.657 21:56:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.657 anon_hugepages=0 00:04:54.657 21:56:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.657 21:56:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.657 21:56:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.657 21:56:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.657 21:56:36 -- setup/common.sh@18 -- # local node= 00:04:54.657 21:56:36 -- setup/common.sh@19 -- # local var val 00:04:54.657 21:56:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.657 21:56:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.657 21:56:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.657 21:56:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.657 21:56:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.657 21:56:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026644 kB' 'MemFree: 25814140 kB' 'MemAvailable: 30993300 kB' 'Buffers: 2696 kB' 'Cached: 13232060 kB' 'SwapCached: 0 kB' 'Active: 9181776 kB' 'Inactive: 4630096 kB' 'Active(anon): 8611668 kB' 'Inactive(anon): 0 kB' 'Active(file): 570108 kB' 'Inactive(file): 4630096 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580240 kB' 'Mapped: 217564 kB' 'Shmem: 8034552 kB' 'KReclaimable: 525296 kB' 'Slab: 900828 kB' 'SReclaimable: 525296 kB' 'SUnreclaim: 375532 kB' 'KernelStack: 12736 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353348 kB' 'Committed_AS: 9780724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196636 kB' 'VmallocChunk: 0 kB' 'Percpu: 44736 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1707612 kB' 'DirectMap2M: 20232192 kB' 'DirectMap1G: 30408704 kB' 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 
21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.657 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.657 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 
-- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.658 21:56:36 -- setup/common.sh@33 -- # echo 1024 00:04:54.658 21:56:36 -- setup/common.sh@33 -- # return 0 00:04:54.658 21:56:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.658 21:56:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.658 21:56:36 -- setup/hugepages.sh@27 -- # local node 00:04:54.658 21:56:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.658 21:56:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.658 21:56:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.658 21:56:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:54.658 21:56:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.658 21:56:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.658 21:56:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.658 21:56:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.658 21:56:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.658 21:56:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.658 21:56:36 -- setup/common.sh@18 -- # local node=0 00:04:54.658 21:56:36 -- setup/common.sh@19 -- # local var 
val 00:04:54.658 21:56:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.658 21:56:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.658 21:56:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.658 21:56:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.658 21:56:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.658 21:56:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 15653612 kB' 'MemUsed: 8965800 kB' 'SwapCached: 0 kB' 'Active: 5832220 kB' 'Inactive: 331748 kB' 'Active(anon): 5420700 kB' 'Inactive(anon): 0 kB' 'Active(file): 411520 kB' 'Inactive(file): 331748 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5732644 kB' 'Mapped: 155104 kB' 'AnonPages: 434448 kB' 'Shmem: 4989376 kB' 'KernelStack: 7240 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 137292 kB' 'Slab: 316188 kB' 'SReclaimable: 137292 kB' 'SUnreclaim: 178896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.658 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.658 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # continue 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.659 21:56:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.659 21:56:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.659 21:56:36 -- setup/common.sh@33 -- # echo 0 00:04:54.659 21:56:36 -- setup/common.sh@33 -- # return 0 00:04:54.659 21:56:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.659 21:56:36 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:04:54.659 21:56:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.659 21:56:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.659 21:56:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.659 node0=1024 expecting 1024 00:04:54.659 21:56:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.659 00:04:54.659 real 0m3.239s 00:04:54.659 user 0m1.353s 00:04:54.659 sys 0m1.838s 00:04:54.659 21:56:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.659 21:56:36 -- common/autotest_common.sh@10 -- # set +x 00:04:54.659 ************************************ 00:04:54.659 END TEST no_shrink_alloc 00:04:54.659 ************************************ 00:04:54.659 21:56:36 -- setup/hugepages.sh@217 -- # clear_hp 00:04:54.659 21:56:36 -- setup/hugepages.sh@37 -- # local node hp 00:04:54.659 21:56:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.659 21:56:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.659 21:56:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.659 21:56:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.659 21:56:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.659 21:56:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.659 21:56:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.659 21:56:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.659 21:56:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.659 21:56:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.659 21:56:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:54.659 21:56:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:54.659 00:04:54.659 real 0m13.344s 00:04:54.659 user 0m5.085s 00:04:54.659 sys 
0m7.066s 00:04:54.659 21:56:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.659 21:56:36 -- common/autotest_common.sh@10 -- # set +x 00:04:54.659 ************************************ 00:04:54.659 END TEST hugepages 00:04:54.659 ************************************ 00:04:54.659 21:56:36 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.659 21:56:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.659 21:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.659 21:56:36 -- common/autotest_common.sh@10 -- # set +x 00:04:54.917 ************************************ 00:04:54.917 START TEST driver 00:04:54.917 ************************************ 00:04:54.917 21:56:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.917 * Looking for test storage... 00:04:54.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.917 21:56:36 -- setup/driver.sh@68 -- # setup reset 00:04:54.917 21:56:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.917 21:56:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:57.446 21:56:39 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:57.446 21:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.446 21:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.446 21:56:39 -- common/autotest_common.sh@10 -- # set +x 00:04:57.446 ************************************ 00:04:57.446 START TEST guess_driver 00:04:57.446 ************************************ 00:04:57.446 21:56:39 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:57.446 21:56:39 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:57.446 21:56:39 -- setup/driver.sh@47 -- # local fail=0 00:04:57.446 21:56:39 -- 
setup/driver.sh@49 -- # pick_driver 00:04:57.446 21:56:39 -- setup/driver.sh@36 -- # vfio 00:04:57.446 21:56:39 -- setup/driver.sh@21 -- # local iommu_groups 00:04:57.446 21:56:39 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:57.446 21:56:39 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:57.446 21:56:39 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:57.446 21:56:39 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:57.446 21:56:39 -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:57.446 21:56:39 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:57.446 21:56:39 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:57.446 21:56:39 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:57.446 21:56:39 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:57.446 21:56:39 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:57.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:57.446 21:56:39 -- setup/driver.sh@30 -- # return 0 00:04:57.446 21:56:39 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:57.446 21:56:39 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:57.446 21:56:39 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:57.446 21:56:39 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 
00:04:57.446 Looking for driver=vfio-pci 00:04:57.446 21:56:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.446 21:56:39 -- setup/driver.sh@45 -- # setup output config 00:04:57.446 21:56:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.446 21:56:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.910 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.910 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.910 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.910 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.910 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.910 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.910 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 
-- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.911 21:56:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.911 21:56:40 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.911 21:56:40 -- setup/driver.sh@57 -- # read -r _ _ _ 
_ marker setup_driver 00:04:59.841 21:56:41 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.841 21:56:41 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.841 21:56:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.841 21:56:41 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:59.841 21:56:41 -- setup/driver.sh@65 -- # setup reset 00:04:59.841 21:56:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.841 21:56:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.368 00:05:02.368 real 0m4.944s 00:05:02.368 user 0m1.153s 00:05:02.368 sys 0m2.039s 00:05:02.368 21:56:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.368 21:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.368 ************************************ 00:05:02.368 END TEST guess_driver 00:05:02.368 ************************************ 00:05:02.368 00:05:02.368 real 0m7.509s 00:05:02.368 user 0m1.810s 00:05:02.368 sys 0m3.183s 00:05:02.368 21:56:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.368 21:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.368 ************************************ 00:05:02.368 END TEST driver 00:05:02.368 ************************************ 00:05:02.368 21:56:44 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:02.368 21:56:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.368 21:56:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.368 21:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.368 ************************************ 00:05:02.368 START TEST devices 00:05:02.368 ************************************ 00:05:02.368 21:56:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:02.368 * Looking for test storage... 
00:05:02.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.368 21:56:44 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:02.368 21:56:44 -- setup/devices.sh@192 -- # setup reset 00:05:02.368 21:56:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.368 21:56:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.268 21:56:46 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:04.268 21:56:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:04.268 21:56:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:04.268 21:56:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:04.268 21:56:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.268 21:56:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:04.268 21:56:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:04.268 21:56:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.268 21:56:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.268 21:56:46 -- setup/devices.sh@196 -- # blocks=() 00:05:04.268 21:56:46 -- setup/devices.sh@196 -- # declare -a blocks 00:05:04.268 21:56:46 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:04.268 21:56:46 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:04.268 21:56:46 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:04.268 21:56:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:04.268 21:56:46 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:04.268 21:56:46 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:04.268 21:56:46 -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:05:04.268 21:56:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:05:04.268 21:56:46 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:04.268 21:56:46 -- scripts/common.sh@378 
-- # local block=nvme0n1 pt 00:05:04.268 21:56:46 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:04.268 No valid GPT data, bailing 00:05:04.268 21:56:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.268 21:56:46 -- scripts/common.sh@391 -- # pt= 00:05:04.268 21:56:46 -- scripts/common.sh@392 -- # return 1 00:05:04.268 21:56:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:04.268 21:56:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:04.268 21:56:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:04.268 21:56:46 -- setup/common.sh@80 -- # echo 1000204886016 00:05:04.268 21:56:46 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:04.268 21:56:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:04.269 21:56:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:05:04.269 21:56:46 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:04.269 21:56:46 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:04.269 21:56:46 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:04.269 21:56:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.269 21:56:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.269 21:56:46 -- common/autotest_common.sh@10 -- # set +x 00:05:04.269 ************************************ 00:05:04.269 START TEST nvme_mount 00:05:04.269 ************************************ 00:05:04.269 21:56:46 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:04.269 21:56:46 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:04.269 21:56:46 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:04.269 21:56:46 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.269 21:56:46 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.269 21:56:46 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:04.269 21:56:46 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:04.269 21:56:46 -- setup/common.sh@40 -- # local part_no=1 00:05:04.269 21:56:46 -- setup/common.sh@41 -- # local size=1073741824 00:05:04.269 21:56:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:04.269 21:56:46 -- setup/common.sh@44 -- # parts=() 00:05:04.269 21:56:46 -- setup/common.sh@44 -- # local parts 00:05:04.269 21:56:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:04.269 21:56:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.269 21:56:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:04.269 21:56:46 -- setup/common.sh@46 -- # (( part++ )) 00:05:04.269 21:56:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:04.269 21:56:46 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:04.269 21:56:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:04.269 21:56:46 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:05.202 Creating new GPT entries in memory. 00:05:05.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:05.202 other utilities. 00:05:05.202 21:56:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:05.202 21:56:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.202 21:56:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.202 21:56:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.202 21:56:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:06.575 Creating new GPT entries in memory. 00:05:06.575 The operation has completed successfully. 
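The 1000204886016 figure echoed during the device scan above is the disk capacity rebuilt from its 512-byte sector count, and a disk only qualifies when that capacity reaches `min_disk_size` (3221225472 bytes in devices.sh). A sketch of the arithmetic; the sector count is an assumed value matching a 1 TB drive, not read from the log:

```shell
# Sector-to-bytes conversion and the capacity gate from the trace above.
min_disk_size=3221225472   # 3 GiB, as declared by setup/devices.sh

bytes_from_sectors() {
    # /sys/block/<dev>/size reports 512-byte sectors regardless of the
    # device's logical block size.
    echo $(( $1 * 512 ))
}

size=$(bytes_from_sectors 1953525168)   # assumed sector count of a 1 TB disk
echo "$size"                            # 1000204886016
(( size >= min_disk_size )) && echo "qualifies"
```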
00:05:06.575 21:56:48 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.575 21:56:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.575 21:56:48 -- setup/common.sh@62 -- # wait 3822604 00:05:06.575 21:56:48 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.575 21:56:48 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:06.575 21:56:48 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.575 21:56:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:06.575 21:56:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:06.575 21:56:48 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.575 21:56:48 -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.575 21:56:48 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:06.575 21:56:48 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:06.575 21:56:48 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.575 21:56:48 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.575 21:56:48 -- setup/devices.sh@53 -- # local found=0 00:05:06.575 21:56:48 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.575 21:56:48 -- setup/devices.sh@56 -- # : 00:05:06.575 21:56:48 -- setup/devices.sh@59 -- # local pci status 00:05:06.576 21:56:48 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:06.576 21:56:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:06.576 21:56:48 -- setup/devices.sh@47 -- # setup output config 00:05:06.576 21:56:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.576 21:56:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:07.535 21:56:49 -- setup/devices.sh@63 -- # found=1 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.535 21:56:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.535 21:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.793 21:56:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.793 21:56:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:07.793 21:56:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.793 21:56:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.793 
21:56:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.793 21:56:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:07.793 21:56:49 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.793 21:56:49 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.793 21:56:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.793 21:56:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.793 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.793 21:56:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.793 21:56:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.051 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.051 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.051 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.051 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.051 21:56:50 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:08.051 21:56:50 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:08.051 21:56:50 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.051 21:56:50 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:08.051 21:56:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:08.051 21:56:50 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.051 21:56:50 -- setup/devices.sh@116 -- # verify 0000:82:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.051 21:56:50 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:08.051 21:56:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:08.051 21:56:50 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.051 21:56:50 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.051 21:56:50 -- setup/devices.sh@53 -- # local found=0 00:05:08.051 21:56:50 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.051 21:56:50 -- setup/devices.sh@56 -- # : 00:05:08.051 21:56:50 -- setup/devices.sh@59 -- # local pci status 00:05:08.051 21:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.051 21:56:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:08.051 21:56:50 -- setup/devices.sh@47 -- # setup output config 00:05:08.051 21:56:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.051 21:56:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.460 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.460 21:56:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:09.460 21:56:51 -- setup/devices.sh@63 -- # found=1 00:05:09.460 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.460 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.460 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.460 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == 
\0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.460 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.461 21:56:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:09.461 21:56:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.461 21:56:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.461 21:56:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.461 21:56:51 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.461 21:56:51 -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:05:09.461 21:56:51 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:09.461 21:56:51 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:09.461 21:56:51 -- setup/devices.sh@50 -- # local mount_point= 00:05:09.461 21:56:51 -- setup/devices.sh@51 -- # local test_file= 00:05:09.461 21:56:51 -- setup/devices.sh@53 -- # local found=0 00:05:09.461 21:56:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.461 21:56:51 -- setup/devices.sh@59 -- # local pci status 00:05:09.461 21:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.461 21:56:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:09.461 21:56:51 -- setup/devices.sh@47 -- # setup 
output config 00:05:09.461 21:56:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.461 21:56:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:10.836 21:56:52 -- setup/devices.sh@63 -- # found=1 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.836 21:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:10.836 21:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.094 21:56:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.095 21:56:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:11.095 21:56:53 -- setup/devices.sh@68 -- # return 0 00:05:11.095 21:56:53 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:11.095 21:56:53 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.095 21:56:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.095 21:56:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.095 21:56:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.095 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:05:11.095 00:05:11.095 real 0m6.778s 00:05:11.095 user 0m1.571s 00:05:11.095 sys 0m2.835s 00:05:11.095 21:56:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.095 21:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:11.095 ************************************ 00:05:11.095 END TEST nvme_mount 00:05:11.095 ************************************ 00:05:11.095 21:56:53 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:11.095 21:56:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.095 21:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.095 21:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:11.095 ************************************ 00:05:11.095 START TEST dm_mount 00:05:11.095 ************************************ 00:05:11.095 21:56:53 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:11.095 21:56:53 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:11.095 21:56:53 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:11.095 21:56:53 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:11.095 21:56:53 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:11.095 21:56:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.095 21:56:53 -- setup/common.sh@40 -- # local part_no=2 00:05:11.095 21:56:53 -- setup/common.sh@41 -- # local size=1073741824 00:05:11.095 21:56:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.095 21:56:53 -- setup/common.sh@44 -- # parts=() 00:05:11.095 21:56:53 -- setup/common.sh@44 -- # local parts 00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.095 21:56:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part++ )) 00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.095 21:56:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part++ )) 00:05:11.095 21:56:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.095 21:56:53 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:11.095 21:56:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.095 21:56:53 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:12.469 Creating new GPT entries in memory. 00:05:12.469 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.469 other utilities. 00:05:12.469 21:56:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.469 21:56:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.469 21:56:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.469 21:56:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.469 21:56:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.404 Creating new GPT entries in memory. 00:05:13.404 The operation has completed successfully. 00:05:13.404 21:56:55 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.404 21:56:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.404 21:56:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.404 21:56:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.404 21:56:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:14.338 The operation has completed successfully. 
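The two sgdisk calls above (`--new=1:2048:2099199` and `--new=2:2099200:4196351`) come from the loop in setup/common.sh that converts the 1 GiB partition size into 512-byte sectors and packs partitions back to back from sector 2048. A sketch of just the bounds arithmetic, with the destructive sgdisk invocation left commented out:

```shell
# Reproduce the partition bounds seen in the trace: 1 GiB per partition,
# expressed in 512-byte sectors, first partition starting at sector 2048,
# each subsequent partition starting right after the previous end.
size=$((1073741824 / 512))   # 2097152 sectors per partition
part_start=0 part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "$part:$part_start:$part_end"
    # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end
done
```

This prints `1:2048:2099199` and `2:2099200:4196351`, matching both sgdisk lines in the log.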
00:05:14.338 21:56:56 -- setup/common.sh@57 -- # (( part++ )) 00:05:14.339 21:56:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.339 21:56:56 -- setup/common.sh@62 -- # wait 3825026 00:05:14.339 21:56:56 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:14.339 21:56:56 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.339 21:56:56 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.339 21:56:56 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:14.339 21:56:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:14.339 21:56:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.339 21:56:56 -- setup/devices.sh@161 -- # break 00:05:14.339 21:56:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.339 21:56:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:14.339 21:56:56 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:14.339 21:56:56 -- setup/devices.sh@166 -- # dm=dm-0 00:05:14.339 21:56:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:14.339 21:56:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:14.339 21:56:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.339 21:56:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:14.339 21:56:56 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.339 21:56:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.339 21:56:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:14.339 21:56:56 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.339 21:56:56 -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.339 21:56:56 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:14.339 21:56:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:14.339 21:56:56 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.339 21:56:56 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.339 21:56:56 -- setup/devices.sh@53 -- # local found=0 00:05:14.339 21:56:56 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.339 21:56:56 -- setup/devices.sh@56 -- # : 00:05:14.339 21:56:56 -- setup/devices.sh@59 -- # local pci status 00:05:14.339 21:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.339 21:56:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:14.339 21:56:56 -- setup/devices.sh@47 -- # setup output config 00:05:14.339 21:56:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.339 21:56:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:15.712 21:56:57 -- setup/devices.sh@63 -- # found=1 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 
21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.712 21:56:57 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:15.712 21:56:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.712 21:56:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:15.712 21:56:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:15.712 21:56:57 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.712 21:56:57 -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:15.712 21:56:57 -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:15.712 21:56:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:15.712 21:56:57 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.712 21:56:57 -- setup/devices.sh@51 -- # local test_file= 00:05:15.712 21:56:57 -- setup/devices.sh@53 -- # local found=0 00:05:15.712 21:56:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:05:15.712 21:56:57 -- setup/devices.sh@59 -- # local pci status 00:05:15.712 21:56:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.712 21:56:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:15.712 21:56:57 -- setup/devices.sh@47 -- # setup output config 00:05:15.712 21:56:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.712 21:56:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:17.084 21:56:59 -- setup/devices.sh@63 -- # found=1 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.084 21:56:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.084 21:56:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.084 21:56:59 -- setup/devices.sh@68 -- # return 0 00:05:17.084 21:56:59 -- setup/devices.sh@187 -- # cleanup_dm 00:05:17.084 21:56:59 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.084 21:56:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.084 21:56:59 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:17.084 21:56:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:17.084 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.084 21:56:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:17.084 00:05:17.084 real 0m5.904s 00:05:17.084 user 0m1.052s 00:05:17.084 sys 0m1.770s 00:05:17.084 21:56:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.084 21:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.084 ************************************ 00:05:17.084 END TEST dm_mount 00:05:17.084 ************************************ 00:05:17.084 21:56:59 -- setup/devices.sh@1 -- # cleanup 00:05:17.084 21:56:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:17.084 21:56:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.084 21:56:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.084 21:56:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.084 21:56:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.342 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:17.342 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:17.342 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.342 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.342 21:56:59 -- setup/devices.sh@12 -- # cleanup_dm 00:05:17.342 
21:56:59 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.342 21:56:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.342 21:56:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.342 21:56:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.342 21:56:59 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.342 21:56:59 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:17.342 00:05:17.342 real 0m14.991s 00:05:17.342 user 0m3.415s 00:05:17.342 sys 0m5.872s 00:05:17.342 21:56:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.342 21:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.342 ************************************ 00:05:17.342 END TEST devices 00:05:17.342 ************************************ 00:05:17.342 00:05:17.342 real 0m47.674s 00:05:17.342 user 0m14.007s 00:05:17.342 sys 0m22.508s 00:05:17.342 21:56:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.342 21:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.342 ************************************ 00:05:17.342 END TEST setup.sh 00:05:17.342 ************************************ 00:05:17.342 21:56:59 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.714 Hugepages 00:05:18.714 node hugesize free / total 00:05:18.714 node0 1048576kB 0 / 0 00:05:18.714 node0 2048kB 2048 / 2048 00:05:18.714 node1 1048576kB 0 / 0 00:05:18.714 node1 2048kB 0 / 0 00:05:18.714 00:05:18.714 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.714 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:18.714 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:18.714 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:18.714 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:18.714 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:18.714 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:18.715 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:18.715 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:18.715 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:18.715 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:18.715 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:18.715 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:18.972 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:18.972 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:18.972 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:18.972 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:18.972 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:18.972 21:57:01 -- spdk/autotest.sh@130 -- # uname -s 00:05:18.972 21:57:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:18.972 21:57:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:18.972 21:57:01 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.346 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.346 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:20.346 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.278 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.278 21:57:03 -- common/autotest_common.sh@1518 
-- # sleep 1 00:05:22.650 21:57:04 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:22.650 21:57:04 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:22.650 21:57:04 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:22.650 21:57:04 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:22.650 21:57:04 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:22.650 21:57:04 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:22.650 21:57:04 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.650 21:57:04 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.650 21:57:04 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:22.650 21:57:04 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:22.650 21:57:04 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:05:22.650 21:57:04 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.583 Waiting for block devices as requested 00:05:23.840 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:23.840 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:23.840 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:23.840 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:24.096 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:24.096 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:24.096 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:24.096 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:24.353 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:24.353 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:24.353 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:24.353 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:24.610 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:24.610 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:24.610 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:24.610 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:24.868 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:24.868 21:57:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:24.868 21:57:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1488 -- # grep 0000:82:00.0/nvme/nvme 00:05:24.868 21:57:06 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:24.868 21:57:06 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:24.868 21:57:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:24.868 21:57:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:24.868 21:57:06 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:24.868 21:57:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:24.868 21:57:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:24.868 21:57:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:24.868 21:57:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:24.868 21:57:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:24.868 21:57:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:24.868 21:57:06 -- common/autotest_common.sh@1541 -- # [[ 
0 -eq 0 ]] 00:05:24.868 21:57:06 -- common/autotest_common.sh@1543 -- # continue 00:05:24.868 21:57:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:24.868 21:57:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:24.868 21:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.868 21:57:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:24.868 21:57:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:24.868 21:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:24.868 21:57:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.239 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.239 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.239 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.172 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.430 21:57:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:27.430 21:57:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.430 21:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.430 21:57:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:27.430 21:57:09 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 
00:05:27.430 21:57:09 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.430 21:57:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:27.430 21:57:09 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:27.430 21:57:09 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:27.430 21:57:09 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:27.430 21:57:09 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:27.430 21:57:09 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.430 21:57:09 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:27.430 21:57:09 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:27.430 21:57:09 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:27.430 21:57:09 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:05:27.430 21:57:09 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:27.430 21:57:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:27.430 21:57:09 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:27.430 21:57:09 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:27.430 21:57:09 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:27.430 21:57:09 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:82:00.0 00:05:27.430 21:57:09 -- common/autotest_common.sh@1578 -- # [[ -z 0000:82:00.0 ]] 00:05:27.430 21:57:09 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3830882 00:05:27.430 21:57:09 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.430 21:57:09 -- common/autotest_common.sh@1584 -- # waitforlisten 3830882 00:05:27.430 21:57:09 -- common/autotest_common.sh@817 -- # '[' -z 3830882 ']' 00:05:27.430 21:57:09 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.430 21:57:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.431 21:57:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.431 21:57:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.431 21:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.431 [2024-04-24 21:57:09.678947] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:27.431 [2024-04-24 21:57:09.679038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830882 ] 00:05:27.688 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.688 [2024-04-24 21:57:09.747442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.688 [2024-04-24 21:57:09.867495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.946 21:57:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.946 21:57:10 -- common/autotest_common.sh@850 -- # return 0 00:05:27.946 21:57:10 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:05:27.946 21:57:10 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:05:27.946 21:57:10 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:31.225 nvme0n1 00:05:31.225 21:57:13 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:31.486 [2024-04-24 21:57:13.535913] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting 
admin SP session with error 18 00:05:31.486 [2024-04-24 21:57:13.535965] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:31.486 request: 00:05:31.486 { 00:05:31.486 "nvme_ctrlr_name": "nvme0", 00:05:31.486 "password": "test", 00:05:31.486 "method": "bdev_nvme_opal_revert", 00:05:31.486 "req_id": 1 00:05:31.486 } 00:05:31.486 Got JSON-RPC error response 00:05:31.486 response: 00:05:31.486 { 00:05:31.486 "code": -32603, 00:05:31.486 "message": "Internal error" 00:05:31.486 } 00:05:31.486 21:57:13 -- common/autotest_common.sh@1590 -- # true 00:05:31.486 21:57:13 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:05:31.486 21:57:13 -- common/autotest_common.sh@1594 -- # killprocess 3830882 00:05:31.486 21:57:13 -- common/autotest_common.sh@936 -- # '[' -z 3830882 ']' 00:05:31.486 21:57:13 -- common/autotest_common.sh@940 -- # kill -0 3830882 00:05:31.486 21:57:13 -- common/autotest_common.sh@941 -- # uname 00:05:31.486 21:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.486 21:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3830882 00:05:31.486 21:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.486 21:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.486 21:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3830882' 00:05:31.486 killing process with pid 3830882 00:05:31.486 21:57:13 -- common/autotest_common.sh@955 -- # kill 3830882 00:05:31.486 21:57:13 -- common/autotest_common.sh@960 -- # wait 3830882 00:05:33.381 21:57:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:33.381 21:57:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:33.381 21:57:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:33.381 21:57:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:33.381 21:57:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:33.381 21:57:15 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:05:33.382 21:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.382 21:57:15 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:33.382 21:57:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.382 21:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.382 21:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.382 ************************************ 00:05:33.382 START TEST env 00:05:33.382 ************************************ 00:05:33.382 21:57:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:33.382 * Looking for test storage... 00:05:33.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:33.382 21:57:15 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.382 21:57:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.382 21:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.382 21:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.639 ************************************ 00:05:33.639 START TEST env_memory 00:05:33.639 ************************************ 00:05:33.639 21:57:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:33.639 00:05:33.639 00:05:33.639 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.639 http://cunit.sourceforge.net/ 00:05:33.639 00:05:33.639 00:05:33.639 Suite: memory 00:05:33.639 Test: alloc and free memory map ...[2024-04-24 21:57:15.752869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:33.639 passed 00:05:33.639 Test: mem map translation ...[2024-04-24 21:57:15.782242] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:33.639 [2024-04-24 21:57:15.782275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:33.639 [2024-04-24 21:57:15.782343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:33.639 [2024-04-24 21:57:15.782363] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:33.639 passed 00:05:33.639 Test: mem map registration ...[2024-04-24 21:57:15.843862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:33.639 [2024-04-24 21:57:15.843892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:33.639 passed 00:05:33.897 Test: mem map adjacent registrations ...passed 00:05:33.897 00:05:33.897 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.897 suites 1 1 n/a 0 0 00:05:33.897 tests 4 4 4 0 0 00:05:33.897 asserts 152 152 152 0 n/a 00:05:33.897 00:05:33.897 Elapsed time = 0.205 seconds 00:05:33.897 00:05:33.897 real 0m0.214s 00:05:33.897 user 0m0.205s 00:05:33.897 sys 0m0.008s 00:05:33.897 21:57:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.897 21:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.897 ************************************ 00:05:33.897 END TEST env_memory 00:05:33.897 ************************************ 00:05:33.897 21:57:15 -- 
env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.897 21:57:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.897 21:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.897 21:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.897 ************************************ 00:05:33.897 START TEST env_vtophys 00:05:33.897 ************************************ 00:05:33.897 21:57:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:33.897 EAL: lib.eal log level changed from notice to debug 00:05:33.897 EAL: Detected lcore 0 as core 0 on socket 0 00:05:33.897 EAL: Detected lcore 1 as core 1 on socket 0 00:05:33.897 EAL: Detected lcore 2 as core 2 on socket 0 00:05:33.897 EAL: Detected lcore 3 as core 3 on socket 0 00:05:33.897 EAL: Detected lcore 4 as core 4 on socket 0 00:05:33.897 EAL: Detected lcore 5 as core 5 on socket 0 00:05:33.897 EAL: Detected lcore 6 as core 8 on socket 0 00:05:33.897 EAL: Detected lcore 7 as core 9 on socket 0 00:05:33.897 EAL: Detected lcore 8 as core 10 on socket 0 00:05:33.897 EAL: Detected lcore 9 as core 11 on socket 0 00:05:33.897 EAL: Detected lcore 10 as core 12 on socket 0 00:05:33.897 EAL: Detected lcore 11 as core 13 on socket 0 00:05:33.897 EAL: Detected lcore 12 as core 0 on socket 1 00:05:33.897 EAL: Detected lcore 13 as core 1 on socket 1 00:05:33.897 EAL: Detected lcore 14 as core 2 on socket 1 00:05:33.897 EAL: Detected lcore 15 as core 3 on socket 1 00:05:33.897 EAL: Detected lcore 16 as core 4 on socket 1 00:05:33.897 EAL: Detected lcore 17 as core 5 on socket 1 00:05:33.897 EAL: Detected lcore 18 as core 8 on socket 1 00:05:33.897 EAL: Detected lcore 19 as core 9 on socket 1 00:05:33.897 EAL: Detected lcore 20 as core 10 on socket 1 00:05:33.897 EAL: Detected lcore 21 as core 11 on socket 1 00:05:33.897 EAL: Detected lcore 22 as core 12 on socket 
1 00:05:33.897 EAL: Detected lcore 23 as core 13 on socket 1 00:05:33.897 EAL: Detected lcore 24 as core 0 on socket 0 00:05:33.897 EAL: Detected lcore 25 as core 1 on socket 0 00:05:33.897 EAL: Detected lcore 26 as core 2 on socket 0 00:05:33.897 EAL: Detected lcore 27 as core 3 on socket 0 00:05:33.897 EAL: Detected lcore 28 as core 4 on socket 0 00:05:33.897 EAL: Detected lcore 29 as core 5 on socket 0 00:05:33.897 EAL: Detected lcore 30 as core 8 on socket 0 00:05:33.897 EAL: Detected lcore 31 as core 9 on socket 0 00:05:33.897 EAL: Detected lcore 32 as core 10 on socket 0 00:05:33.897 EAL: Detected lcore 33 as core 11 on socket 0 00:05:33.897 EAL: Detected lcore 34 as core 12 on socket 0 00:05:33.897 EAL: Detected lcore 35 as core 13 on socket 0 00:05:33.897 EAL: Detected lcore 36 as core 0 on socket 1 00:05:33.897 EAL: Detected lcore 37 as core 1 on socket 1 00:05:33.897 EAL: Detected lcore 38 as core 2 on socket 1 00:05:33.897 EAL: Detected lcore 39 as core 3 on socket 1 00:05:33.897 EAL: Detected lcore 40 as core 4 on socket 1 00:05:33.897 EAL: Detected lcore 41 as core 5 on socket 1 00:05:33.897 EAL: Detected lcore 42 as core 8 on socket 1 00:05:33.897 EAL: Detected lcore 43 as core 9 on socket 1 00:05:33.897 EAL: Detected lcore 44 as core 10 on socket 1 00:05:33.897 EAL: Detected lcore 45 as core 11 on socket 1 00:05:33.897 EAL: Detected lcore 46 as core 12 on socket 1 00:05:33.897 EAL: Detected lcore 47 as core 13 on socket 1 00:05:33.897 EAL: Maximum logical cores by configuration: 128 00:05:33.897 EAL: Detected CPU lcores: 48 00:05:33.897 EAL: Detected NUMA nodes: 2 00:05:33.897 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:33.897 EAL: Detected shared linkage of DPDK 00:05:33.897 EAL: No shared files mode enabled, IPC will be disabled 00:05:33.897 EAL: Bus pci wants IOVA as 'DC' 00:05:33.897 EAL: Buses did not request a specific IOVA mode. 00:05:33.897 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:05:33.898 EAL: Selected IOVA mode 'VA' 00:05:33.898 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.898 EAL: Probing VFIO support... 00:05:33.898 EAL: IOMMU type 1 (Type 1) is supported 00:05:33.898 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:33.898 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:33.898 EAL: VFIO support initialized 00:05:33.898 EAL: Ask a virtual area of 0x2e000 bytes 00:05:33.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:33.898 EAL: Setting up physically contiguous memory... 00:05:33.898 EAL: Setting maximum number of open files to 524288 00:05:33.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:33.898 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:33.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:05:33.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:33.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:33.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.898 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:33.898 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:05:33.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.898 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:33.898 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:33.898 EAL: Hugepages will be freed exactly as allocated. 00:05:33.898 EAL: No shared files mode enabled, IPC is disabled 00:05:33.898 EAL: No shared files mode enabled, IPC is disabled 00:05:33.898 EAL: TSC frequency is ~2700000 KHz 00:05:33.898 EAL: Main lcore 0 is ready (tid=7feb775a0a00;cpuset=[0]) 00:05:33.898 EAL: Trying to obtain current memory policy. 00:05:33.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.898 EAL: Restoring previous memory policy: 0 00:05:33.898 EAL: request: mp_malloc_sync 00:05:33.898 EAL: No shared files mode enabled, IPC is disabled 00:05:34.156 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.156 EAL: No shared files mode enabled, IPC is disabled 00:05:34.156 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.156 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.156 00:05:34.156 00:05:34.156 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.156 http://cunit.sourceforge.net/ 00:05:34.156 00:05:34.156 00:05:34.156 Suite: components_suite 00:05:34.156 Test: vtophys_malloc_test ...passed 00:05:34.156 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:34.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.156 EAL: Restoring previous memory policy: 4 00:05:34.156 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.156 EAL: request: mp_malloc_sync 00:05:34.156 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.157 EAL: Trying to obtain current memory policy. 00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.157 EAL: Trying to obtain current memory policy. 00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.157 EAL: Trying to obtain current memory policy. 
00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.157 EAL: Trying to obtain current memory policy. 00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.157 EAL: Trying to obtain current memory policy. 00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.157 EAL: Trying to obtain current memory policy. 
00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.157 EAL: Restoring previous memory policy: 4 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.157 EAL: request: mp_malloc_sync 00:05:34.157 EAL: No shared files mode enabled, IPC is disabled 00:05:34.157 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.157 EAL: Trying to obtain current memory policy. 00:05:34.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.415 EAL: Restoring previous memory policy: 4 00:05:34.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.415 EAL: request: mp_malloc_sync 00:05:34.415 EAL: No shared files mode enabled, IPC is disabled 00:05:34.415 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.415 EAL: request: mp_malloc_sync 00:05:34.415 EAL: No shared files mode enabled, IPC is disabled 00:05:34.415 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.415 EAL: Trying to obtain current memory policy. 00:05:34.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.673 EAL: Restoring previous memory policy: 4 00:05:34.673 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.673 EAL: request: mp_malloc_sync 00:05:34.673 EAL: No shared files mode enabled, IPC is disabled 00:05:34.673 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.673 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.673 EAL: request: mp_malloc_sync 00:05:34.673 EAL: No shared files mode enabled, IPC is disabled 00:05:34.673 EAL: Heap on socket 0 was shrunk by 514MB 00:05:34.673 EAL: Trying to obtain current memory policy. 
00:05:34.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.237 EAL: Restoring previous memory policy: 4 00:05:35.237 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.237 EAL: request: mp_malloc_sync 00:05:35.237 EAL: No shared files mode enabled, IPC is disabled 00:05:35.237 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.237 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.495 EAL: request: mp_malloc_sync 00:05:35.495 EAL: No shared files mode enabled, IPC is disabled 00:05:35.495 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.495 passed 00:05:35.495 00:05:35.495 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.495 suites 1 1 n/a 0 0 00:05:35.495 tests 2 2 2 0 0 00:05:35.495 asserts 497 497 497 0 n/a 00:05:35.495 00:05:35.495 Elapsed time = 1.430 seconds 00:05:35.495 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.495 EAL: request: mp_malloc_sync 00:05:35.495 EAL: No shared files mode enabled, IPC is disabled 00:05:35.495 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.495 EAL: No shared files mode enabled, IPC is disabled 00:05:35.495 EAL: No shared files mode enabled, IPC is disabled 00:05:35.495 EAL: No shared files mode enabled, IPC is disabled 00:05:35.495 00:05:35.495 real 0m1.595s 00:05:35.495 user 0m0.917s 00:05:35.495 sys 0m0.645s 00:05:35.495 21:57:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.495 21:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.495 ************************************ 00:05:35.495 END TEST env_vtophys 00:05:35.495 ************************************ 00:05:35.495 21:57:17 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.495 21:57:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.495 21:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.495 21:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.752 ************************************ 00:05:35.752 
START TEST env_pci 00:05:35.752 ************************************ 00:05:35.752 21:57:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.752 00:05:35.752 00:05:35.752 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.752 http://cunit.sourceforge.net/ 00:05:35.752 00:05:35.752 00:05:35.752 Suite: pci 00:05:35.752 Test: pci_hook ...[2024-04-24 21:57:17.800085] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3832014 has claimed it 00:05:35.752 EAL: Cannot find device (10000:00:01.0) 00:05:35.752 EAL: Failed to attach device on primary process 00:05:35.752 passed 00:05:35.752 00:05:35.752 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.752 suites 1 1 n/a 0 0 00:05:35.752 tests 1 1 1 0 0 00:05:35.752 asserts 25 25 25 0 n/a 00:05:35.752 00:05:35.752 Elapsed time = 0.026 seconds 00:05:35.752 00:05:35.752 real 0m0.041s 00:05:35.752 user 0m0.015s 00:05:35.752 sys 0m0.025s 00:05:35.752 21:57:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.752 21:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.752 ************************************ 00:05:35.752 END TEST env_pci 00:05:35.752 ************************************ 00:05:35.752 21:57:17 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.752 21:57:17 -- env/env.sh@15 -- # uname 00:05:35.752 21:57:17 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.752 21:57:17 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.752 21:57:17 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.752 21:57:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:35.752 21:57:17 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:05:35.752 21:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.752 ************************************ 00:05:35.752 START TEST env_dpdk_post_init 00:05:35.752 ************************************ 00:05:35.752 21:57:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.752 EAL: Detected CPU lcores: 48 00:05:35.752 EAL: Detected NUMA nodes: 2 00:05:35.752 EAL: Detected shared linkage of DPDK 00:05:35.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.010 EAL: Selected IOVA mode 'VA' 00:05:36.010 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.010 EAL: VFIO support initialized 00:05:36.010 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.010 EAL: Using IOMMU type 1 (Type 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:36.010 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:36.010 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:36.266 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:36.830 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:40.106 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:40.106 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:40.106 Starting DPDK initialization... 00:05:40.106 Starting SPDK post initialization... 00:05:40.106 SPDK NVMe probe 00:05:40.106 Attaching to 0000:82:00.0 00:05:40.106 Attached to 0000:82:00.0 00:05:40.106 Cleaning up... 00:05:40.363 00:05:40.363 real 0m4.399s 00:05:40.363 user 0m3.243s 00:05:40.363 sys 0m0.212s 00:05:40.363 21:57:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.363 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 ************************************ 00:05:40.363 END TEST env_dpdk_post_init 00:05:40.363 ************************************ 00:05:40.363 21:57:22 -- env/env.sh@26 -- # uname 00:05:40.363 21:57:22 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.363 21:57:22 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.363 21:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.363 21:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.363 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 ************************************ 00:05:40.363 START TEST env_mem_callbacks 00:05:40.363 ************************************ 00:05:40.363 21:57:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.363 EAL: Detected CPU lcores: 48 
00:05:40.363 EAL: Detected NUMA nodes: 2 00:05:40.363 EAL: Detected shared linkage of DPDK 00:05:40.363 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.363 EAL: Selected IOVA mode 'VA' 00:05:40.363 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.363 EAL: VFIO support initialized 00:05:40.363 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.363 00:05:40.363 00:05:40.363 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.363 http://cunit.sourceforge.net/ 00:05:40.363 00:05:40.363 00:05:40.363 Suite: memory 00:05:40.363 Test: test ... 00:05:40.363 register 0x200000200000 2097152 00:05:40.363 malloc 3145728 00:05:40.363 register 0x200000400000 4194304 00:05:40.363 buf 0x200000500000 len 3145728 PASSED 00:05:40.363 malloc 64 00:05:40.363 buf 0x2000004fff40 len 64 PASSED 00:05:40.363 malloc 4194304 00:05:40.363 register 0x200000800000 6291456 00:05:40.363 buf 0x200000a00000 len 4194304 PASSED 00:05:40.363 free 0x200000500000 3145728 00:05:40.363 free 0x2000004fff40 64 00:05:40.363 unregister 0x200000400000 4194304 PASSED 00:05:40.363 free 0x200000a00000 4194304 00:05:40.363 unregister 0x200000800000 6291456 PASSED 00:05:40.363 malloc 8388608 00:05:40.363 register 0x200000400000 10485760 00:05:40.363 buf 0x200000600000 len 8388608 PASSED 00:05:40.363 free 0x200000600000 8388608 00:05:40.363 unregister 0x200000400000 10485760 PASSED 00:05:40.363 passed 00:05:40.363 00:05:40.363 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.363 suites 1 1 n/a 0 0 00:05:40.363 tests 1 1 1 0 0 00:05:40.363 asserts 15 15 15 0 n/a 00:05:40.363 00:05:40.363 Elapsed time = 0.005 seconds 00:05:40.363 00:05:40.363 real 0m0.058s 00:05:40.363 user 0m0.020s 00:05:40.363 sys 0m0.037s 00:05:40.363 21:57:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.363 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 ************************************ 00:05:40.363 END TEST env_mem_callbacks 00:05:40.363 
************************************ 00:05:40.363 00:05:40.363 real 0m7.095s 00:05:40.363 user 0m4.706s 00:05:40.363 sys 0m1.373s 00:05:40.363 21:57:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.363 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.363 ************************************ 00:05:40.363 END TEST env 00:05:40.363 ************************************ 00:05:40.621 21:57:22 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.621 21:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.621 21:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.621 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.621 ************************************ 00:05:40.621 START TEST rpc 00:05:40.621 ************************************ 00:05:40.621 21:57:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.621 * Looking for test storage... 00:05:40.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:40.621 21:57:22 -- rpc/rpc.sh@65 -- # spdk_pid=3832691 00:05:40.621 21:57:22 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:40.621 21:57:22 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.621 21:57:22 -- rpc/rpc.sh@67 -- # waitforlisten 3832691 00:05:40.621 21:57:22 -- common/autotest_common.sh@817 -- # '[' -z 3832691 ']' 00:05:40.621 21:57:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.621 21:57:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.621 21:57:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.621 21:57:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.621 21:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.621 [2024-04-24 21:57:22.875797] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:40.621 [2024-04-24 21:57:22.875896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832691 ] 00:05:40.878 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.878 [2024-04-24 21:57:22.973023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.878 [2024-04-24 21:57:23.130052] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.878 [2024-04-24 21:57:23.130133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3832691' to capture a snapshot of events at runtime. 00:05:40.878 [2024-04-24 21:57:23.130165] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.878 [2024-04-24 21:57:23.130192] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.878 [2024-04-24 21:57:23.130214] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3832691 for offline analysis/debug. 
00:05:40.878 [2024-04-24 21:57:23.130274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.811 21:57:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.811 21:57:23 -- common/autotest_common.sh@850 -- # return 0 00:05:41.811 21:57:23 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.811 21:57:23 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.811 21:57:23 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.811 21:57:23 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.811 21:57:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.811 21:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.811 21:57:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.811 ************************************ 00:05:41.811 START TEST rpc_integrity 00:05:41.811 ************************************ 00:05:41.811 21:57:24 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:41.811 21:57:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.811 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:41.811 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:41.811 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:41.811 21:57:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.811 21:57:24 -- rpc/rpc.sh@13 -- # jq length 00:05:41.811 21:57:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:41.811 21:57:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.811 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:41.811 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.069 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.069 21:57:24 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.069 21:57:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.069 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.069 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.069 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.069 21:57:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.069 { 00:05:42.069 "name": "Malloc0", 00:05:42.069 "aliases": [ 00:05:42.069 "e7dc6976-cec0-4c16-ba91-481fe48571a5" 00:05:42.069 ], 00:05:42.069 "product_name": "Malloc disk", 00:05:42.069 "block_size": 512, 00:05:42.069 "num_blocks": 16384, 00:05:42.069 "uuid": "e7dc6976-cec0-4c16-ba91-481fe48571a5", 00:05:42.069 "assigned_rate_limits": { 00:05:42.069 "rw_ios_per_sec": 0, 00:05:42.069 "rw_mbytes_per_sec": 0, 00:05:42.069 "r_mbytes_per_sec": 0, 00:05:42.069 "w_mbytes_per_sec": 0 00:05:42.069 }, 00:05:42.069 "claimed": false, 00:05:42.069 "zoned": false, 00:05:42.069 "supported_io_types": { 00:05:42.069 "read": true, 00:05:42.069 "write": true, 00:05:42.069 "unmap": true, 00:05:42.069 "write_zeroes": true, 00:05:42.069 "flush": true, 00:05:42.069 "reset": true, 00:05:42.069 "compare": false, 00:05:42.069 "compare_and_write": false, 00:05:42.069 "abort": true, 00:05:42.069 "nvme_admin": false, 00:05:42.069 "nvme_io": false 00:05:42.069 }, 00:05:42.069 "memory_domains": [ 00:05:42.069 { 00:05:42.069 "dma_device_id": "system", 00:05:42.069 "dma_device_type": 1 00:05:42.069 }, 00:05:42.069 { 00:05:42.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.069 "dma_device_type": 2 00:05:42.069 } 00:05:42.069 ], 00:05:42.069 "driver_specific": {} 00:05:42.069 } 00:05:42.069 ]' 00:05:42.069 
21:57:24 -- rpc/rpc.sh@17 -- # jq length 00:05:42.069 21:57:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.069 21:57:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.069 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.069 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.069 [2024-04-24 21:57:24.126838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.069 [2024-04-24 21:57:24.126889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.069 [2024-04-24 21:57:24.126914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x823510 00:05:42.069 [2024-04-24 21:57:24.126929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.069 [2024-04-24 21:57:24.128384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.069 [2024-04-24 21:57:24.128427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.069 Passthru0 00:05:42.069 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.069 21:57:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.069 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.069 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.069 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.069 21:57:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.069 { 00:05:42.069 "name": "Malloc0", 00:05:42.069 "aliases": [ 00:05:42.069 "e7dc6976-cec0-4c16-ba91-481fe48571a5" 00:05:42.069 ], 00:05:42.069 "product_name": "Malloc disk", 00:05:42.069 "block_size": 512, 00:05:42.069 "num_blocks": 16384, 00:05:42.069 "uuid": "e7dc6976-cec0-4c16-ba91-481fe48571a5", 00:05:42.069 "assigned_rate_limits": { 00:05:42.069 "rw_ios_per_sec": 0, 00:05:42.069 "rw_mbytes_per_sec": 0, 00:05:42.069 "r_mbytes_per_sec": 0, 00:05:42.069 "w_mbytes_per_sec": 0 00:05:42.069 
}, 00:05:42.069 "claimed": true, 00:05:42.069 "claim_type": "exclusive_write", 00:05:42.069 "zoned": false, 00:05:42.069 "supported_io_types": { 00:05:42.069 "read": true, 00:05:42.069 "write": true, 00:05:42.069 "unmap": true, 00:05:42.069 "write_zeroes": true, 00:05:42.069 "flush": true, 00:05:42.069 "reset": true, 00:05:42.069 "compare": false, 00:05:42.069 "compare_and_write": false, 00:05:42.069 "abort": true, 00:05:42.069 "nvme_admin": false, 00:05:42.069 "nvme_io": false 00:05:42.069 }, 00:05:42.069 "memory_domains": [ 00:05:42.069 { 00:05:42.069 "dma_device_id": "system", 00:05:42.069 "dma_device_type": 1 00:05:42.069 }, 00:05:42.070 { 00:05:42.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.070 "dma_device_type": 2 00:05:42.070 } 00:05:42.070 ], 00:05:42.070 "driver_specific": {} 00:05:42.070 }, 00:05:42.070 { 00:05:42.070 "name": "Passthru0", 00:05:42.070 "aliases": [ 00:05:42.070 "26005640-98da-53f1-9666-9601af8ec24c" 00:05:42.070 ], 00:05:42.070 "product_name": "passthru", 00:05:42.070 "block_size": 512, 00:05:42.070 "num_blocks": 16384, 00:05:42.070 "uuid": "26005640-98da-53f1-9666-9601af8ec24c", 00:05:42.070 "assigned_rate_limits": { 00:05:42.070 "rw_ios_per_sec": 0, 00:05:42.070 "rw_mbytes_per_sec": 0, 00:05:42.070 "r_mbytes_per_sec": 0, 00:05:42.070 "w_mbytes_per_sec": 0 00:05:42.070 }, 00:05:42.070 "claimed": false, 00:05:42.070 "zoned": false, 00:05:42.070 "supported_io_types": { 00:05:42.070 "read": true, 00:05:42.070 "write": true, 00:05:42.070 "unmap": true, 00:05:42.070 "write_zeroes": true, 00:05:42.070 "flush": true, 00:05:42.070 "reset": true, 00:05:42.070 "compare": false, 00:05:42.070 "compare_and_write": false, 00:05:42.070 "abort": true, 00:05:42.070 "nvme_admin": false, 00:05:42.070 "nvme_io": false 00:05:42.070 }, 00:05:42.070 "memory_domains": [ 00:05:42.070 { 00:05:42.070 "dma_device_id": "system", 00:05:42.070 "dma_device_type": 1 00:05:42.070 }, 00:05:42.070 { 00:05:42.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:05:42.070 "dma_device_type": 2 00:05:42.070 } 00:05:42.070 ], 00:05:42.070 "driver_specific": { 00:05:42.070 "passthru": { 00:05:42.070 "name": "Passthru0", 00:05:42.070 "base_bdev_name": "Malloc0" 00:05:42.070 } 00:05:42.070 } 00:05:42.070 } 00:05:42.070 ]' 00:05:42.070 21:57:24 -- rpc/rpc.sh@21 -- # jq length 00:05:42.070 21:57:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.070 21:57:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.070 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.070 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.070 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.070 21:57:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.070 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.070 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.070 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.070 21:57:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.070 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.070 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.070 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.070 21:57:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.070 21:57:24 -- rpc/rpc.sh@26 -- # jq length 00:05:42.070 21:57:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.070 00:05:42.070 real 0m0.239s 00:05:42.070 user 0m0.155s 00:05:42.070 sys 0m0.024s 00:05:42.070 21:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.070 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.070 ************************************ 00:05:42.070 END TEST rpc_integrity 00:05:42.070 ************************************ 00:05:42.070 21:57:24 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.070 21:57:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.070 21:57:24 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:05:42.070 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 ************************************ 00:05:42.327 START TEST rpc_plugins 00:05:42.327 ************************************ 00:05:42.327 21:57:24 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:42.327 21:57:24 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.327 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.327 21:57:24 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.327 21:57:24 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.327 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.327 21:57:24 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.327 { 00:05:42.327 "name": "Malloc1", 00:05:42.327 "aliases": [ 00:05:42.327 "c23a0f49-5f76-4f23-9968-c5fbf73fe342" 00:05:42.327 ], 00:05:42.327 "product_name": "Malloc disk", 00:05:42.327 "block_size": 4096, 00:05:42.327 "num_blocks": 256, 00:05:42.327 "uuid": "c23a0f49-5f76-4f23-9968-c5fbf73fe342", 00:05:42.327 "assigned_rate_limits": { 00:05:42.327 "rw_ios_per_sec": 0, 00:05:42.327 "rw_mbytes_per_sec": 0, 00:05:42.327 "r_mbytes_per_sec": 0, 00:05:42.327 "w_mbytes_per_sec": 0 00:05:42.327 }, 00:05:42.327 "claimed": false, 00:05:42.327 "zoned": false, 00:05:42.327 "supported_io_types": { 00:05:42.327 "read": true, 00:05:42.327 "write": true, 00:05:42.327 "unmap": true, 00:05:42.327 "write_zeroes": true, 00:05:42.327 "flush": true, 00:05:42.327 "reset": true, 00:05:42.327 "compare": false, 00:05:42.327 "compare_and_write": false, 00:05:42.327 "abort": true, 00:05:42.327 "nvme_admin": false, 00:05:42.327 "nvme_io": false 00:05:42.327 }, 00:05:42.327 "memory_domains": [ 00:05:42.327 { 
00:05:42.327 "dma_device_id": "system", 00:05:42.327 "dma_device_type": 1 00:05:42.327 }, 00:05:42.327 { 00:05:42.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.327 "dma_device_type": 2 00:05:42.327 } 00:05:42.327 ], 00:05:42.327 "driver_specific": {} 00:05:42.327 } 00:05:42.327 ]' 00:05:42.327 21:57:24 -- rpc/rpc.sh@32 -- # jq length 00:05:42.327 21:57:24 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.327 21:57:24 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.327 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.327 21:57:24 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.327 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.327 21:57:24 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.327 21:57:24 -- rpc/rpc.sh@36 -- # jq length 00:05:42.327 21:57:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.327 00:05:42.327 real 0m0.120s 00:05:42.327 user 0m0.078s 00:05:42.327 sys 0m0.010s 00:05:42.327 21:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 ************************************ 00:05:42.327 END TEST rpc_plugins 00:05:42.327 ************************************ 00:05:42.327 21:57:24 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.327 21:57:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.327 21:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.327 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.585 ************************************ 00:05:42.585 START TEST rpc_trace_cmd_test 00:05:42.585 ************************************ 00:05:42.585 
21:57:24 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:42.585 21:57:24 -- rpc/rpc.sh@40 -- # local info 00:05:42.585 21:57:24 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.585 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.585 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.585 21:57:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.585 21:57:24 -- rpc/rpc.sh@42 -- # info='{ 00:05:42.585 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3832691", 00:05:42.585 "tpoint_group_mask": "0x8", 00:05:42.585 "iscsi_conn": { 00:05:42.585 "mask": "0x2", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "scsi": { 00:05:42.585 "mask": "0x4", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "bdev": { 00:05:42.585 "mask": "0x8", 00:05:42.585 "tpoint_mask": "0xffffffffffffffff" 00:05:42.585 }, 00:05:42.585 "nvmf_rdma": { 00:05:42.585 "mask": "0x10", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "nvmf_tcp": { 00:05:42.585 "mask": "0x20", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "ftl": { 00:05:42.585 "mask": "0x40", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "blobfs": { 00:05:42.585 "mask": "0x80", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "dsa": { 00:05:42.585 "mask": "0x200", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "thread": { 00:05:42.585 "mask": "0x400", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "nvme_pcie": { 00:05:42.585 "mask": "0x800", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "iaa": { 00:05:42.585 "mask": "0x1000", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "nvme_tcp": { 00:05:42.585 "mask": "0x2000", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "bdev_nvme": { 00:05:42.585 "mask": "0x4000", 00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 }, 00:05:42.585 "sock": { 00:05:42.585 "mask": "0x8000", 
00:05:42.585 "tpoint_mask": "0x0" 00:05:42.585 } 00:05:42.585 }' 00:05:42.585 21:57:24 -- rpc/rpc.sh@43 -- # jq length 00:05:42.585 21:57:24 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.585 21:57:24 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.585 21:57:24 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.585 21:57:24 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.585 21:57:24 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.585 21:57:24 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.585 21:57:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.585 21:57:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.843 21:57:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.843 00:05:42.843 real 0m0.209s 00:05:42.843 user 0m0.184s 00:05:42.843 sys 0m0.016s 00:05:42.843 21:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.843 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.843 ************************************ 00:05:42.843 END TEST rpc_trace_cmd_test 00:05:42.843 ************************************ 00:05:42.843 21:57:24 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.843 21:57:24 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.843 21:57:24 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.843 21:57:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.843 21:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.843 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.843 ************************************ 00:05:42.843 START TEST rpc_daemon_integrity 00:05:42.843 ************************************ 00:05:42.843 21:57:24 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:42.843 21:57:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.843 21:57:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.843 21:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.843 21:57:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.843 21:57:25 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.843 21:57:25 -- rpc/rpc.sh@13 -- # jq length 00:05:42.843 21:57:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.843 21:57:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.843 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.843 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.843 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.843 21:57:25 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.843 21:57:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.843 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.843 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.843 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.843 21:57:25 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.843 { 00:05:42.843 "name": "Malloc2", 00:05:42.843 "aliases": [ 00:05:42.843 "59d9181c-76b3-43dd-92be-964da13bd8b7" 00:05:42.843 ], 00:05:42.843 "product_name": "Malloc disk", 00:05:42.843 "block_size": 512, 00:05:42.843 "num_blocks": 16384, 00:05:42.843 "uuid": "59d9181c-76b3-43dd-92be-964da13bd8b7", 00:05:42.843 "assigned_rate_limits": { 00:05:42.843 "rw_ios_per_sec": 0, 00:05:42.843 "rw_mbytes_per_sec": 0, 00:05:42.843 "r_mbytes_per_sec": 0, 00:05:42.843 "w_mbytes_per_sec": 0 00:05:42.843 }, 00:05:42.843 "claimed": false, 00:05:42.843 "zoned": false, 00:05:42.843 "supported_io_types": { 00:05:42.843 "read": true, 00:05:42.843 "write": true, 00:05:42.843 "unmap": true, 00:05:42.843 "write_zeroes": true, 00:05:42.843 "flush": true, 00:05:42.843 "reset": true, 00:05:42.843 "compare": false, 00:05:42.843 "compare_and_write": false, 00:05:42.843 "abort": true, 00:05:42.843 "nvme_admin": false, 00:05:42.843 "nvme_io": false 00:05:42.843 }, 00:05:42.843 "memory_domains": [ 00:05:42.843 { 00:05:42.843 "dma_device_id": "system", 00:05:42.843 "dma_device_type": 1 00:05:42.843 }, 
00:05:42.843 { 00:05:42.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.843 "dma_device_type": 2 00:05:42.843 } 00:05:42.843 ], 00:05:42.843 "driver_specific": {} 00:05:42.843 } 00:05:42.843 ]' 00:05:42.843 21:57:25 -- rpc/rpc.sh@17 -- # jq length 00:05:43.101 21:57:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.101 21:57:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.101 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.101 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.101 [2024-04-24 21:57:25.106409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.101 [2024-04-24 21:57:25.106461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.101 [2024-04-24 21:57:25.106486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x826e60 00:05:43.101 [2024-04-24 21:57:25.106501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.101 [2024-04-24 21:57:25.107836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.101 [2024-04-24 21:57:25.107866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.101 Passthru0 00:05:43.101 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.101 21:57:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.101 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.101 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.101 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.101 21:57:25 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.101 { 00:05:43.101 "name": "Malloc2", 00:05:43.101 "aliases": [ 00:05:43.101 "59d9181c-76b3-43dd-92be-964da13bd8b7" 00:05:43.101 ], 00:05:43.101 "product_name": "Malloc disk", 00:05:43.101 "block_size": 512, 00:05:43.101 "num_blocks": 16384, 00:05:43.101 "uuid": 
"59d9181c-76b3-43dd-92be-964da13bd8b7", 00:05:43.101 "assigned_rate_limits": { 00:05:43.101 "rw_ios_per_sec": 0, 00:05:43.101 "rw_mbytes_per_sec": 0, 00:05:43.101 "r_mbytes_per_sec": 0, 00:05:43.101 "w_mbytes_per_sec": 0 00:05:43.101 }, 00:05:43.101 "claimed": true, 00:05:43.101 "claim_type": "exclusive_write", 00:05:43.101 "zoned": false, 00:05:43.101 "supported_io_types": { 00:05:43.101 "read": true, 00:05:43.101 "write": true, 00:05:43.101 "unmap": true, 00:05:43.101 "write_zeroes": true, 00:05:43.101 "flush": true, 00:05:43.101 "reset": true, 00:05:43.101 "compare": false, 00:05:43.101 "compare_and_write": false, 00:05:43.101 "abort": true, 00:05:43.101 "nvme_admin": false, 00:05:43.101 "nvme_io": false 00:05:43.101 }, 00:05:43.101 "memory_domains": [ 00:05:43.101 { 00:05:43.101 "dma_device_id": "system", 00:05:43.101 "dma_device_type": 1 00:05:43.101 }, 00:05:43.101 { 00:05:43.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.101 "dma_device_type": 2 00:05:43.101 } 00:05:43.101 ], 00:05:43.101 "driver_specific": {} 00:05:43.101 }, 00:05:43.101 { 00:05:43.101 "name": "Passthru0", 00:05:43.101 "aliases": [ 00:05:43.101 "3ae2e4ba-6fa3-5d1f-bd65-11212ae5de39" 00:05:43.101 ], 00:05:43.101 "product_name": "passthru", 00:05:43.101 "block_size": 512, 00:05:43.101 "num_blocks": 16384, 00:05:43.101 "uuid": "3ae2e4ba-6fa3-5d1f-bd65-11212ae5de39", 00:05:43.101 "assigned_rate_limits": { 00:05:43.101 "rw_ios_per_sec": 0, 00:05:43.101 "rw_mbytes_per_sec": 0, 00:05:43.101 "r_mbytes_per_sec": 0, 00:05:43.101 "w_mbytes_per_sec": 0 00:05:43.101 }, 00:05:43.101 "claimed": false, 00:05:43.101 "zoned": false, 00:05:43.101 "supported_io_types": { 00:05:43.101 "read": true, 00:05:43.101 "write": true, 00:05:43.101 "unmap": true, 00:05:43.101 "write_zeroes": true, 00:05:43.101 "flush": true, 00:05:43.101 "reset": true, 00:05:43.101 "compare": false, 00:05:43.101 "compare_and_write": false, 00:05:43.101 "abort": true, 00:05:43.101 "nvme_admin": false, 00:05:43.101 "nvme_io": false 
00:05:43.101 }, 00:05:43.101 "memory_domains": [ 00:05:43.101 { 00:05:43.102 "dma_device_id": "system", 00:05:43.102 "dma_device_type": 1 00:05:43.102 }, 00:05:43.102 { 00:05:43.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.102 "dma_device_type": 2 00:05:43.102 } 00:05:43.102 ], 00:05:43.102 "driver_specific": { 00:05:43.102 "passthru": { 00:05:43.102 "name": "Passthru0", 00:05:43.102 "base_bdev_name": "Malloc2" 00:05:43.102 } 00:05:43.102 } 00:05:43.102 } 00:05:43.102 ]' 00:05:43.102 21:57:25 -- rpc/rpc.sh@21 -- # jq length 00:05:43.102 21:57:25 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.102 21:57:25 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.102 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.102 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.102 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.102 21:57:25 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.102 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.102 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.102 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.102 21:57:25 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.102 21:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.102 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.102 21:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.102 21:57:25 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.102 21:57:25 -- rpc/rpc.sh@26 -- # jq length 00:05:43.102 21:57:25 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.102 00:05:43.102 real 0m0.237s 00:05:43.102 user 0m0.155s 00:05:43.102 sys 0m0.025s 00:05:43.102 21:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.102 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.102 ************************************ 00:05:43.102 END TEST rpc_daemon_integrity 00:05:43.102 
************************************ 00:05:43.102 21:57:25 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.102 21:57:25 -- rpc/rpc.sh@84 -- # killprocess 3832691 00:05:43.102 21:57:25 -- common/autotest_common.sh@936 -- # '[' -z 3832691 ']' 00:05:43.102 21:57:25 -- common/autotest_common.sh@940 -- # kill -0 3832691 00:05:43.102 21:57:25 -- common/autotest_common.sh@941 -- # uname 00:05:43.102 21:57:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.102 21:57:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3832691 00:05:43.102 21:57:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.102 21:57:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.102 21:57:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3832691' 00:05:43.102 killing process with pid 3832691 00:05:43.102 21:57:25 -- common/autotest_common.sh@955 -- # kill 3832691 00:05:43.102 21:57:25 -- common/autotest_common.sh@960 -- # wait 3832691 00:05:43.667 00:05:43.667 real 0m2.994s 00:05:43.667 user 0m3.852s 00:05:43.667 sys 0m0.871s 00:05:43.667 21:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.667 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.667 ************************************ 00:05:43.667 END TEST rpc 00:05:43.667 ************************************ 00:05:43.667 21:57:25 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.667 21:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.667 21:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.667 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.667 ************************************ 00:05:43.667 START TEST skip_rpc 00:05:43.667 ************************************ 00:05:43.667 21:57:25 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.925 * Looking for test storage... 00:05:43.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.925 21:57:25 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.925 21:57:25 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.925 21:57:25 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.925 21:57:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.925 21:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.925 21:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.925 ************************************ 00:05:43.925 START TEST skip_rpc 00:05:43.925 ************************************ 00:05:43.925 21:57:26 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:43.925 21:57:26 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3833302 00:05:43.925 21:57:26 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.925 21:57:26 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.925 21:57:26 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.925 [2024-04-24 21:57:26.133084] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:05:43.925 [2024-04-24 21:57:26.133169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3833302 ] 00:05:43.925 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.181 [2024-04-24 21:57:26.204056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.181 [2024-04-24 21:57:26.323960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.438 21:57:31 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.438 21:57:31 -- common/autotest_common.sh@638 -- # local es=0 00:05:49.438 21:57:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.438 21:57:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:49.438 21:57:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.438 21:57:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:49.438 21:57:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:49.438 21:57:31 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:49.438 21:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:49.438 21:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.438 21:57:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:49.438 21:57:31 -- common/autotest_common.sh@641 -- # es=1 00:05:49.438 21:57:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:49.438 21:57:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:49.438 21:57:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:49.438 21:57:31 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.438 21:57:31 -- rpc/skip_rpc.sh@23 -- # killprocess 3833302 00:05:49.438 21:57:31 -- common/autotest_common.sh@936 -- # '[' -z 3833302 ']' 00:05:49.438 21:57:31 -- common/autotest_common.sh@940 -- # 
kill -0 3833302 00:05:49.438 21:57:31 -- common/autotest_common.sh@941 -- # uname 00:05:49.438 21:57:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.438 21:57:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3833302 00:05:49.438 21:57:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.438 21:57:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.438 21:57:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3833302' 00:05:49.438 killing process with pid 3833302 00:05:49.438 21:57:31 -- common/autotest_common.sh@955 -- # kill 3833302 00:05:49.438 21:57:31 -- common/autotest_common.sh@960 -- # wait 3833302 00:05:49.438 00:05:49.438 real 0m5.509s 00:05:49.438 user 0m5.172s 00:05:49.438 sys 0m0.341s 00:05:49.438 21:57:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.438 21:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.438 ************************************ 00:05:49.438 END TEST skip_rpc 00:05:49.438 ************************************ 00:05:49.438 21:57:31 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.438 21:57:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.438 21:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.438 21:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.696 ************************************ 00:05:49.696 START TEST skip_rpc_with_json 00:05:49.696 ************************************ 00:05:49.696 21:57:31 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:49.696 21:57:31 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.696 21:57:31 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3833999 00:05:49.696 21:57:31 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.696 21:57:31 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 
00:05:49.696 21:57:31 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3833999 00:05:49.696 21:57:31 -- common/autotest_common.sh@817 -- # '[' -z 3833999 ']' 00:05:49.696 21:57:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.696 21:57:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.696 21:57:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.696 21:57:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.696 21:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.696 [2024-04-24 21:57:31.788373] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:49.696 [2024-04-24 21:57:31.788495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3833999 ] 00:05:49.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.696 [2024-04-24 21:57:31.860692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.953 [2024-04-24 21:57:31.978516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.211 21:57:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.211 21:57:32 -- common/autotest_common.sh@850 -- # return 0 00:05:50.211 21:57:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.211 21:57:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.211 21:57:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.211 [2024-04-24 21:57:32.251749] nvmf_rpc.c:2517:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.211 request: 00:05:50.211 { 00:05:50.211 "trtype": "tcp", 00:05:50.211 "method": "nvmf_get_transports", 
00:05:50.211 "req_id": 1 00:05:50.211 } 00:05:50.211 Got JSON-RPC error response 00:05:50.211 response: 00:05:50.211 { 00:05:50.211 "code": -19, 00:05:50.211 "message": "No such device" 00:05:50.211 } 00:05:50.211 21:57:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:50.211 21:57:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.211 21:57:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.211 21:57:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.211 [2024-04-24 21:57:32.259870] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.211 21:57:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.211 21:57:32 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.211 21:57:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.211 21:57:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.211 21:57:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.211 21:57:32 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.211 { 00:05:50.211 "subsystems": [ 00:05:50.211 { 00:05:50.211 "subsystem": "vfio_user_target", 00:05:50.211 "config": null 00:05:50.211 }, 00:05:50.211 { 00:05:50.211 "subsystem": "keyring", 00:05:50.211 "config": [] 00:05:50.211 }, 00:05:50.211 { 00:05:50.211 "subsystem": "iobuf", 00:05:50.211 "config": [ 00:05:50.211 { 00:05:50.211 "method": "iobuf_set_options", 00:05:50.211 "params": { 00:05:50.211 "small_pool_count": 8192, 00:05:50.211 "large_pool_count": 1024, 00:05:50.211 "small_bufsize": 8192, 00:05:50.211 "large_bufsize": 135168 00:05:50.211 } 00:05:50.211 } 00:05:50.211 ] 00:05:50.211 }, 00:05:50.211 { 00:05:50.211 "subsystem": "sock", 00:05:50.211 "config": [ 00:05:50.211 { 00:05:50.211 "method": "sock_impl_set_options", 00:05:50.211 "params": { 00:05:50.211 "impl_name": "posix", 00:05:50.211 "recv_buf_size": 2097152, 00:05:50.211 "send_buf_size": 2097152, 00:05:50.211 
"enable_recv_pipe": true, 00:05:50.211 "enable_quickack": false, 00:05:50.211 "enable_placement_id": 0, 00:05:50.211 "enable_zerocopy_send_server": true, 00:05:50.211 "enable_zerocopy_send_client": false, 00:05:50.211 "zerocopy_threshold": 0, 00:05:50.211 "tls_version": 0, 00:05:50.211 "enable_ktls": false 00:05:50.211 } 00:05:50.211 }, 00:05:50.211 { 00:05:50.211 "method": "sock_impl_set_options", 00:05:50.211 "params": { 00:05:50.211 "impl_name": "ssl", 00:05:50.211 "recv_buf_size": 4096, 00:05:50.211 "send_buf_size": 4096, 00:05:50.211 "enable_recv_pipe": true, 00:05:50.211 "enable_quickack": false, 00:05:50.211 "enable_placement_id": 0, 00:05:50.211 "enable_zerocopy_send_server": true, 00:05:50.212 "enable_zerocopy_send_client": false, 00:05:50.212 "zerocopy_threshold": 0, 00:05:50.212 "tls_version": 0, 00:05:50.212 "enable_ktls": false 00:05:50.212 } 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "vmd", 00:05:50.212 "config": [] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "accel", 00:05:50.212 "config": [ 00:05:50.212 { 00:05:50.212 "method": "accel_set_options", 00:05:50.212 "params": { 00:05:50.212 "small_cache_size": 128, 00:05:50.212 "large_cache_size": 16, 00:05:50.212 "task_count": 2048, 00:05:50.212 "sequence_count": 2048, 00:05:50.212 "buf_count": 2048 00:05:50.212 } 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "bdev", 00:05:50.212 "config": [ 00:05:50.212 { 00:05:50.212 "method": "bdev_set_options", 00:05:50.212 "params": { 00:05:50.212 "bdev_io_pool_size": 65535, 00:05:50.212 "bdev_io_cache_size": 256, 00:05:50.212 "bdev_auto_examine": true, 00:05:50.212 "iobuf_small_cache_size": 128, 00:05:50.212 "iobuf_large_cache_size": 16 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "bdev_raid_set_options", 00:05:50.212 "params": { 00:05:50.212 "process_window_size_kb": 1024 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": 
"bdev_iscsi_set_options", 00:05:50.212 "params": { 00:05:50.212 "timeout_sec": 30 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "bdev_nvme_set_options", 00:05:50.212 "params": { 00:05:50.212 "action_on_timeout": "none", 00:05:50.212 "timeout_us": 0, 00:05:50.212 "timeout_admin_us": 0, 00:05:50.212 "keep_alive_timeout_ms": 10000, 00:05:50.212 "arbitration_burst": 0, 00:05:50.212 "low_priority_weight": 0, 00:05:50.212 "medium_priority_weight": 0, 00:05:50.212 "high_priority_weight": 0, 00:05:50.212 "nvme_adminq_poll_period_us": 10000, 00:05:50.212 "nvme_ioq_poll_period_us": 0, 00:05:50.212 "io_queue_requests": 0, 00:05:50.212 "delay_cmd_submit": true, 00:05:50.212 "transport_retry_count": 4, 00:05:50.212 "bdev_retry_count": 3, 00:05:50.212 "transport_ack_timeout": 0, 00:05:50.212 "ctrlr_loss_timeout_sec": 0, 00:05:50.212 "reconnect_delay_sec": 0, 00:05:50.212 "fast_io_fail_timeout_sec": 0, 00:05:50.212 "disable_auto_failback": false, 00:05:50.212 "generate_uuids": false, 00:05:50.212 "transport_tos": 0, 00:05:50.212 "nvme_error_stat": false, 00:05:50.212 "rdma_srq_size": 0, 00:05:50.212 "io_path_stat": false, 00:05:50.212 "allow_accel_sequence": false, 00:05:50.212 "rdma_max_cq_size": 0, 00:05:50.212 "rdma_cm_event_timeout_ms": 0, 00:05:50.212 "dhchap_digests": [ 00:05:50.212 "sha256", 00:05:50.212 "sha384", 00:05:50.212 "sha512" 00:05:50.212 ], 00:05:50.212 "dhchap_dhgroups": [ 00:05:50.212 "null", 00:05:50.212 "ffdhe2048", 00:05:50.212 "ffdhe3072", 00:05:50.212 "ffdhe4096", 00:05:50.212 "ffdhe6144", 00:05:50.212 "ffdhe8192" 00:05:50.212 ] 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "bdev_nvme_set_hotplug", 00:05:50.212 "params": { 00:05:50.212 "period_us": 100000, 00:05:50.212 "enable": false 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "bdev_wait_for_examine" 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "scsi", 00:05:50.212 "config": null 00:05:50.212 
}, 00:05:50.212 { 00:05:50.212 "subsystem": "scheduler", 00:05:50.212 "config": [ 00:05:50.212 { 00:05:50.212 "method": "framework_set_scheduler", 00:05:50.212 "params": { 00:05:50.212 "name": "static" 00:05:50.212 } 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "vhost_scsi", 00:05:50.212 "config": [] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "vhost_blk", 00:05:50.212 "config": [] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "ublk", 00:05:50.212 "config": [] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "nbd", 00:05:50.212 "config": [] 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "subsystem": "nvmf", 00:05:50.212 "config": [ 00:05:50.212 { 00:05:50.212 "method": "nvmf_set_config", 00:05:50.212 "params": { 00:05:50.212 "discovery_filter": "match_any", 00:05:50.212 "admin_cmd_passthru": { 00:05:50.212 "identify_ctrlr": false 00:05:50.212 } 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "nvmf_set_max_subsystems", 00:05:50.212 "params": { 00:05:50.212 "max_subsystems": 1024 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "nvmf_set_crdt", 00:05:50.212 "params": { 00:05:50.212 "crdt1": 0, 00:05:50.212 "crdt2": 0, 00:05:50.212 "crdt3": 0 00:05:50.212 } 00:05:50.212 }, 00:05:50.212 { 00:05:50.212 "method": "nvmf_create_transport", 00:05:50.212 "params": { 00:05:50.212 "trtype": "TCP", 00:05:50.212 "max_queue_depth": 128, 00:05:50.212 "max_io_qpairs_per_ctrlr": 127, 00:05:50.212 "in_capsule_data_size": 4096, 00:05:50.212 "max_io_size": 131072, 00:05:50.212 "io_unit_size": 131072, 00:05:50.212 "max_aq_depth": 128, 00:05:50.212 "num_shared_buffers": 511, 00:05:50.212 "buf_cache_size": 4294967295, 00:05:50.212 "dif_insert_or_strip": false, 00:05:50.212 "zcopy": false, 00:05:50.212 "c2h_success": true, 00:05:50.212 "sock_priority": 0, 00:05:50.212 "abort_timeout_sec": 1, 00:05:50.212 "ack_timeout": 0 00:05:50.212 } 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 
}, 00:05:50.212 { 00:05:50.212 "subsystem": "iscsi", 00:05:50.212 "config": [ 00:05:50.212 { 00:05:50.212 "method": "iscsi_set_options", 00:05:50.212 "params": { 00:05:50.212 "node_base": "iqn.2016-06.io.spdk", 00:05:50.212 "max_sessions": 128, 00:05:50.212 "max_connections_per_session": 2, 00:05:50.212 "max_queue_depth": 64, 00:05:50.212 "default_time2wait": 2, 00:05:50.212 "default_time2retain": 20, 00:05:50.212 "first_burst_length": 8192, 00:05:50.212 "immediate_data": true, 00:05:50.212 "allow_duplicated_isid": false, 00:05:50.212 "error_recovery_level": 0, 00:05:50.212 "nop_timeout": 60, 00:05:50.212 "nop_in_interval": 30, 00:05:50.212 "disable_chap": false, 00:05:50.212 "require_chap": false, 00:05:50.212 "mutual_chap": false, 00:05:50.212 "chap_group": 0, 00:05:50.212 "max_large_datain_per_connection": 64, 00:05:50.212 "max_r2t_per_connection": 4, 00:05:50.212 "pdu_pool_size": 36864, 00:05:50.212 "immediate_data_pool_size": 16384, 00:05:50.212 "data_out_pool_size": 2048 00:05:50.212 } 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 } 00:05:50.212 ] 00:05:50.212 } 00:05:50.212 21:57:32 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.212 21:57:32 -- rpc/skip_rpc.sh@40 -- # killprocess 3833999 00:05:50.212 21:57:32 -- common/autotest_common.sh@936 -- # '[' -z 3833999 ']' 00:05:50.212 21:57:32 -- common/autotest_common.sh@940 -- # kill -0 3833999 00:05:50.212 21:57:32 -- common/autotest_common.sh@941 -- # uname 00:05:50.212 21:57:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.212 21:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3833999 00:05:50.212 21:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.212 21:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.212 21:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3833999' 00:05:50.212 killing process with pid 3833999 00:05:50.212 21:57:32 -- common/autotest_common.sh@955 -- # 
kill 3833999 00:05:50.212 21:57:32 -- common/autotest_common.sh@960 -- # wait 3833999 00:05:50.777 21:57:32 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3834141 00:05:50.777 21:57:32 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.777 21:57:32 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.050 21:57:37 -- rpc/skip_rpc.sh@50 -- # killprocess 3834141 00:05:56.050 21:57:37 -- common/autotest_common.sh@936 -- # '[' -z 3834141 ']' 00:05:56.050 21:57:37 -- common/autotest_common.sh@940 -- # kill -0 3834141 00:05:56.050 21:57:37 -- common/autotest_common.sh@941 -- # uname 00:05:56.050 21:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.050 21:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3834141 00:05:56.050 21:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.050 21:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.050 21:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3834141' 00:05:56.050 killing process with pid 3834141 00:05:56.050 21:57:37 -- common/autotest_common.sh@955 -- # kill 3834141 00:05:56.050 21:57:37 -- common/autotest_common.sh@960 -- # wait 3834141 00:05:56.308 21:57:38 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.308 21:57:38 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.308 00:05:56.308 real 0m6.693s 00:05:56.308 user 0m6.321s 00:05:56.308 sys 0m0.742s 00:05:56.308 21:57:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.308 21:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.308 ************************************ 00:05:56.308 END TEST skip_rpc_with_json 00:05:56.308 ************************************ 
00:05:56.308 21:57:38 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.308 21:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.308 21:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.308 21:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.308 ************************************ 00:05:56.308 START TEST skip_rpc_with_delay 00:05:56.308 ************************************ 00:05:56.308 21:57:38 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:56.308 21:57:38 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.308 21:57:38 -- common/autotest_common.sh@638 -- # local es=0 00:05:56.308 21:57:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.308 21:57:38 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.308 21:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.308 21:57:38 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.308 21:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.308 21:57:38 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.308 21:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.308 21:57:38 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.308 21:57:38 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.308 21:57:38 -- common/autotest_common.sh@641 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.621 [2024-04-24 21:57:38.610366] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:56.621 [2024-04-24 21:57:38.610528] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:56.621 21:57:38 -- common/autotest_common.sh@641 -- # es=1 00:05:56.621 21:57:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.621 21:57:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:56.621 21:57:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.621 00:05:56.621 real 0m0.078s 00:05:56.621 user 0m0.052s 00:05:56.621 sys 0m0.026s 00:05:56.621 21:57:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.621 21:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.621 ************************************ 00:05:56.621 END TEST skip_rpc_with_delay 00:05:56.621 ************************************ 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.621 21:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.621 21:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.621 21:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.621 ************************************ 00:05:56.621 START TEST exit_on_failed_rpc_init 00:05:56.621 ************************************ 00:05:56.621 21:57:38 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3834871 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.621 21:57:38 -- rpc/skip_rpc.sh@63 -- # 
waitforlisten 3834871 00:05:56.621 21:57:38 -- common/autotest_common.sh@817 -- # '[' -z 3834871 ']' 00:05:56.621 21:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.621 21:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.621 21:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.621 21:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.621 21:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.621 [2024-04-24 21:57:38.828523] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:56.621 [2024-04-24 21:57:38.828618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834871 ] 00:05:56.898 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.898 [2024-04-24 21:57:38.896590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.898 [2024-04-24 21:57:39.019618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.155 21:57:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.155 21:57:39 -- common/autotest_common.sh@850 -- # return 0 00:05:57.155 21:57:39 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.155 21:57:39 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.155 21:57:39 -- common/autotest_common.sh@638 -- # local es=0 00:05:57.155 21:57:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.155 21:57:39 -- common/autotest_common.sh@626 -- # 
local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.156 21:57:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.156 21:57:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.156 21:57:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.156 21:57:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.156 21:57:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.156 21:57:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.156 21:57:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.156 21:57:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.156 [2024-04-24 21:57:39.352444] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:57.156 [2024-04-24 21:57:39.352529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834892 ] 00:05:57.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.413 [2024-04-24 21:57:39.421344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.413 [2024-04-24 21:57:39.541275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.413 [2024-04-24 21:57:39.541392] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:57.413 [2024-04-24 21:57:39.541421] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.413 [2024-04-24 21:57:39.541435] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.671 21:57:39 -- common/autotest_common.sh@641 -- # es=234 00:05:57.671 21:57:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.671 21:57:39 -- common/autotest_common.sh@650 -- # es=106 00:05:57.671 21:57:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:57.671 21:57:39 -- common/autotest_common.sh@658 -- # es=1 00:05:57.671 21:57:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.671 21:57:39 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.671 21:57:39 -- rpc/skip_rpc.sh@70 -- # killprocess 3834871 00:05:57.671 21:57:39 -- common/autotest_common.sh@936 -- # '[' -z 3834871 ']' 00:05:57.671 21:57:39 -- common/autotest_common.sh@940 -- # kill -0 3834871 00:05:57.671 21:57:39 -- common/autotest_common.sh@941 -- # uname 00:05:57.671 21:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.671 21:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3834871 00:05:57.671 21:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.671 21:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.671 21:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3834871' 00:05:57.671 killing process with pid 3834871 00:05:57.671 21:57:39 -- common/autotest_common.sh@955 -- # kill 3834871 00:05:57.671 21:57:39 -- common/autotest_common.sh@960 -- # wait 3834871 00:05:58.237 00:05:58.237 real 0m1.419s 00:05:58.237 user 0m1.627s 00:05:58.237 sys 0m0.470s 00:05:58.237 21:57:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.237 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 ************************************ 00:05:58.237 END TEST exit_on_failed_rpc_init 
00:05:58.237 ************************************ 00:05:58.237 21:57:40 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.237 00:05:58.237 real 0m14.339s 00:05:58.237 user 0m13.435s 00:05:58.237 sys 0m1.927s 00:05:58.237 21:57:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.237 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 ************************************ 00:05:58.237 END TEST skip_rpc 00:05:58.237 ************************************ 00:05:58.237 21:57:40 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.237 21:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.237 21:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.237 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 ************************************ 00:05:58.237 START TEST rpc_client 00:05:58.237 ************************************ 00:05:58.237 21:57:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.237 * Looking for test storage... 
00:05:58.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:58.237 21:57:40 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:58.237 OK 00:05:58.237 21:57:40 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.237 00:05:58.237 real 0m0.082s 00:05:58.237 user 0m0.035s 00:05:58.237 sys 0m0.052s 00:05:58.237 21:57:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.237 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.237 ************************************ 00:05:58.237 END TEST rpc_client 00:05:58.237 ************************************ 00:05:58.237 21:57:40 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.237 21:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.237 21:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.237 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.496 ************************************ 00:05:58.496 START TEST json_config 00:05:58.496 ************************************ 00:05:58.496 21:57:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.496 21:57:40 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.496 21:57:40 -- nvmf/common.sh@7 -- # uname -s 00:05:58.496 21:57:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.496 21:57:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.497 21:57:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.497 21:57:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.497 21:57:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.497 21:57:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.497 21:57:40 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.497 21:57:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.497 21:57:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.497 21:57:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.497 21:57:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:58.497 21:57:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:58.497 21:57:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.497 21:57:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.497 21:57:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.497 21:57:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.497 21:57:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.497 21:57:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.497 21:57:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.497 21:57:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.497 21:57:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.497 21:57:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.497 21:57:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.497 21:57:40 -- paths/export.sh@5 -- # export PATH 00:05:58.497 21:57:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.497 21:57:40 -- nvmf/common.sh@47 -- # : 0 00:05:58.497 21:57:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.497 21:57:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.497 21:57:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.497 21:57:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.497 21:57:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.497 21:57:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.497 21:57:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.497 21:57:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.497 
21:57:40 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.497 21:57:40 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.497 21:57:40 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.497 21:57:40 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.497 21:57:40 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.497 21:57:40 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.497 21:57:40 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.497 21:57:40 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.497 21:57:40 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.497 21:57:40 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.497 21:57:40 -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.497 21:57:40 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:58.497 21:57:40 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.497 21:57:40 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.497 21:57:40 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.497 21:57:40 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:58.497 INFO: JSON configuration test init 00:05:58.497 21:57:40 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:58.497 21:57:40 -- json_config/json_config.sh@262 -- # 
timing_enter json_config_test_init 00:05:58.497 21:57:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:58.497 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.497 21:57:40 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:58.497 21:57:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:58.497 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.497 21:57:40 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.497 21:57:40 -- json_config/common.sh@9 -- # local app=target 00:05:58.497 21:57:40 -- json_config/common.sh@10 -- # shift 00:05:58.497 21:57:40 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.497 21:57:40 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.497 21:57:40 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.497 21:57:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.497 21:57:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.497 21:57:40 -- json_config/common.sh@22 -- # app_pid["$app"]=3835256 00:05:58.497 21:57:40 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.497 21:57:40 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.497 Waiting for target to run... 00:05:58.497 21:57:40 -- json_config/common.sh@25 -- # waitforlisten 3835256 /var/tmp/spdk_tgt.sock 00:05:58.497 21:57:40 -- common/autotest_common.sh@817 -- # '[' -z 3835256 ']' 00:05:58.497 21:57:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.497 21:57:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.497 21:57:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:58.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.497 21:57:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.497 21:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.497 [2024-04-24 21:57:40.722565] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:05:58.497 [2024-04-24 21:57:40.722673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835256 ] 00:05:58.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.013 [2024-04-24 21:57:41.105066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.013 [2024-04-24 21:57:41.195847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.579 21:57:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.579 21:57:41 -- common/autotest_common.sh@850 -- # return 0 00:05:59.579 21:57:41 -- json_config/common.sh@26 -- # echo '' 00:05:59.579 00:05:59.579 21:57:41 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:59.579 21:57:41 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:59.579 21:57:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:59.579 21:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:59.579 21:57:41 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:59.579 21:57:41 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:59.579 21:57:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:59.579 21:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:59.579 21:57:41 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.579 21:57:41 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
00:05:59.579 21:57:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:02.865 21:57:45 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:02.865 21:57:45 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:02.865 21:57:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:02.865 21:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.865 21:57:45 -- json_config/json_config.sh@45 -- # local ret=0 00:06:02.865 21:57:45 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:02.865 21:57:45 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:02.865 21:57:45 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:02.865 21:57:45 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:02.865 21:57:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:03.430 21:57:45 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:03.430 21:57:45 -- json_config/json_config.sh@48 -- # local get_types 00:06:03.430 21:57:45 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:03.430 21:57:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:03.430 21:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:03.430 21:57:45 -- json_config/json_config.sh@55 -- # return 0 00:06:03.430 21:57:45 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:03.430 21:57:45 -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:03.430 21:57:45 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:03.430 21:57:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:03.430 21:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:03.430 21:57:45 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.430 21:57:45 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:03.430 21:57:45 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.430 21:57:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.688 MallocForNvmf0 00:06:03.688 21:57:45 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.688 21:57:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:04.252 MallocForNvmf1 00:06:04.252 21:57:46 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.252 21:57:46 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:04.817 [2024-04-24 21:57:46.975702] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.817 21:57:46 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.817 21:57:46 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.382 21:57:47 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.382 21:57:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.948 21:57:48 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.948 21:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.513 21:57:48 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.513 21:57:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:07.079 [2024-04-24 21:57:49.129809] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:07.079 [2024-04-24 21:57:49.130454] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.079 21:57:49 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:07.079 21:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:07.079 21:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:07.079 21:57:49 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:07.079 21:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:07.079 21:57:49 -- common/autotest_common.sh@10 -- 
# set +x 00:06:07.079 21:57:49 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:07.079 21:57:49 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.079 21:57:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.337 MallocBdevForConfigChangeCheck 00:06:07.337 21:57:49 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:07.337 21:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:07.337 21:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:07.337 21:57:49 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:07.337 21:57:49 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.902 21:57:49 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:07.902 INFO: shutting down applications... 
00:06:07.902 21:57:49 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:07.902 21:57:49 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:07.902 21:57:49 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:07.902 21:57:49 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.801 Calling clear_iscsi_subsystem 00:06:09.801 Calling clear_nvmf_subsystem 00:06:09.801 Calling clear_nbd_subsystem 00:06:09.801 Calling clear_ublk_subsystem 00:06:09.801 Calling clear_vhost_blk_subsystem 00:06:09.801 Calling clear_vhost_scsi_subsystem 00:06:09.801 Calling clear_bdev_subsystem 00:06:09.801 21:57:51 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.801 21:57:51 -- json_config/json_config.sh@343 -- # count=100 00:06:09.801 21:57:51 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:09.801 21:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.801 21:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.801 21:57:51 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:10.058 21:57:52 -- json_config/json_config.sh@345 -- # break 00:06:10.058 21:57:52 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:10.058 21:57:52 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:10.058 21:57:52 -- json_config/common.sh@31 -- # local app=target 00:06:10.058 21:57:52 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:10.058 21:57:52 -- json_config/common.sh@35 -- # [[ -n 3835256 ]] 
00:06:10.058 21:57:52 -- json_config/common.sh@38 -- # kill -SIGINT 3835256 00:06:10.058 [2024-04-24 21:57:52.309441] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:10.058 21:57:52 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:10.058 21:57:52 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.058 21:57:52 -- json_config/common.sh@41 -- # kill -0 3835256 00:06:10.058 21:57:52 -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.624 21:57:52 -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.624 21:57:52 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.624 21:57:52 -- json_config/common.sh@41 -- # kill -0 3835256 00:06:10.624 21:57:52 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.624 21:57:52 -- json_config/common.sh@43 -- # break 00:06:10.624 21:57:52 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.624 21:57:52 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.624 SPDK target shutdown done 00:06:10.624 21:57:52 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:10.624 INFO: relaunching applications... 
00:06:10.624 21:57:52 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.624 21:57:52 -- json_config/common.sh@9 -- # local app=target 00:06:10.624 21:57:52 -- json_config/common.sh@10 -- # shift 00:06:10.624 21:57:52 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.624 21:57:52 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.624 21:57:52 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.624 21:57:52 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.624 21:57:52 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.624 21:57:52 -- json_config/common.sh@22 -- # app_pid["$app"]=3836714 00:06:10.624 21:57:52 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.624 21:57:52 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.624 Waiting for target to run... 00:06:10.624 21:57:52 -- json_config/common.sh@25 -- # waitforlisten 3836714 /var/tmp/spdk_tgt.sock 00:06:10.624 21:57:52 -- common/autotest_common.sh@817 -- # '[' -z 3836714 ']' 00:06:10.624 21:57:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.624 21:57:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.624 21:57:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.624 21:57:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.624 21:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:10.882 [2024-04-24 21:57:52.897108] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:06:10.882 [2024-04-24 21:57:52.897211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836714 ] 00:06:10.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.447 [2024-04-24 21:57:53.535834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.447 [2024-04-24 21:57:53.641986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.726 [2024-04-24 21:57:56.682329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.726 [2024-04-24 21:57:56.714292] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:14.726 [2024-04-24 21:57:56.714893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.726 21:57:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.726 21:57:56 -- common/autotest_common.sh@850 -- # return 0 00:06:14.726 21:57:56 -- json_config/common.sh@26 -- # echo '' 00:06:14.726 00:06:14.726 21:57:56 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:14.726 21:57:56 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.726 INFO: Checking if target configuration is the same... 
00:06:14.726 21:57:56 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.726 21:57:56 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:14.726 21:57:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.726 + '[' 2 -ne 2 ']' 00:06:14.726 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.726 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:14.726 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.726 +++ basename /dev/fd/62 00:06:14.726 ++ mktemp /tmp/62.XXX 00:06:14.726 + tmp_file_1=/tmp/62.EgX 00:06:14.726 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.726 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.726 + tmp_file_2=/tmp/spdk_tgt_config.json.BiD 00:06:14.726 + ret=0 00:06:14.726 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.984 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.984 + diff -u /tmp/62.EgX /tmp/spdk_tgt_config.json.BiD 00:06:14.984 + echo 'INFO: JSON config files are the same' 00:06:14.984 INFO: JSON config files are the same 00:06:14.984 + rm /tmp/62.EgX /tmp/spdk_tgt_config.json.BiD 00:06:14.984 + exit 0 00:06:14.984 21:57:57 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:14.984 21:57:57 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:14.984 INFO: changing configuration and checking if this can be detected... 
00:06:14.984 21:57:57 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.984 21:57:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.551 21:57:57 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.551 21:57:57 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.551 21:57:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.551 + '[' 2 -ne 2 ']' 00:06:15.551 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.551 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:15.551 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:15.551 +++ basename /dev/fd/62 00:06:15.551 ++ mktemp /tmp/62.XXX 00:06:15.551 + tmp_file_1=/tmp/62.kHd 00:06:15.551 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.551 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.551 + tmp_file_2=/tmp/spdk_tgt_config.json.RYz 00:06:15.551 + ret=0 00:06:15.551 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.116 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.116 + diff -u /tmp/62.kHd /tmp/spdk_tgt_config.json.RYz 00:06:16.116 + ret=1 00:06:16.116 + echo '=== Start of file: /tmp/62.kHd ===' 00:06:16.116 + cat /tmp/62.kHd 00:06:16.116 + echo '=== End of file: /tmp/62.kHd ===' 00:06:16.116 + echo '' 00:06:16.116 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RYz ===' 00:06:16.116 + cat /tmp/spdk_tgt_config.json.RYz 00:06:16.116 + echo '=== End of file: /tmp/spdk_tgt_config.json.RYz ===' 00:06:16.116 + echo '' 00:06:16.116 + rm /tmp/62.kHd /tmp/spdk_tgt_config.json.RYz 00:06:16.116 + exit 1 00:06:16.116 21:57:58 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:16.116 INFO: configuration change detected. 
00:06:16.116 21:57:58 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:16.116 21:57:58 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:16.116 21:57:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.116 21:57:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 21:57:58 -- json_config/json_config.sh@307 -- # local ret=0 00:06:16.116 21:57:58 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:16.116 21:57:58 -- json_config/json_config.sh@317 -- # [[ -n 3836714 ]] 00:06:16.116 21:57:58 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:16.116 21:57:58 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:16.116 21:57:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.116 21:57:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 21:57:58 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:16.116 21:57:58 -- json_config/json_config.sh@193 -- # uname -s 00:06:16.116 21:57:58 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:16.116 21:57:58 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:16.116 21:57:58 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:16.116 21:57:58 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:16.116 21:57:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:16.116 21:57:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 21:57:58 -- json_config/json_config.sh@323 -- # killprocess 3836714 00:06:16.116 21:57:58 -- common/autotest_common.sh@936 -- # '[' -z 3836714 ']' 00:06:16.116 21:57:58 -- common/autotest_common.sh@940 -- # kill -0 3836714 00:06:16.116 21:57:58 -- common/autotest_common.sh@941 -- # uname 00:06:16.116 21:57:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.116 21:57:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3836714 00:06:16.116 
21:57:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.116 21:57:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.116 21:57:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3836714' 00:06:16.116 killing process with pid 3836714 00:06:16.116 21:57:58 -- common/autotest_common.sh@955 -- # kill 3836714 00:06:16.116 [2024-04-24 21:57:58.248277] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:16.116 21:57:58 -- common/autotest_common.sh@960 -- # wait 3836714 00:06:18.018 21:57:59 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.018 21:57:59 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:18.018 21:57:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:18.018 21:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 21:57:59 -- json_config/json_config.sh@328 -- # return 0 00:06:18.018 21:57:59 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:18.018 INFO: Success 00:06:18.018 00:06:18.018 real 0m19.335s 00:06:18.018 user 0m24.226s 00:06:18.018 sys 0m2.599s 00:06:18.018 21:57:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.018 21:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 ************************************ 00:06:18.018 END TEST json_config 00:06:18.018 ************************************ 00:06:18.018 21:57:59 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.018 21:57:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.018 21:57:59 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:06:18.018 21:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.018 ************************************ 00:06:18.018 START TEST json_config_extra_key 00:06:18.018 ************************************ 00:06:18.018 21:58:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.019 21:58:00 -- nvmf/common.sh@7 -- # uname -s 00:06:18.019 21:58:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.019 21:58:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.019 21:58:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.019 21:58:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.019 21:58:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.019 21:58:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.019 21:58:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.019 21:58:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.019 21:58:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.019 21:58:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.019 21:58:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:18.019 21:58:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:18.019 21:58:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.019 21:58:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.019 21:58:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.019 21:58:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.019 21:58:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.019 
21:58:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.019 21:58:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.019 21:58:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.019 21:58:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.019 21:58:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.019 21:58:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.019 21:58:00 -- paths/export.sh@5 -- # export PATH 00:06:18.019 21:58:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.019 21:58:00 -- nvmf/common.sh@47 -- # : 0 00:06:18.019 21:58:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.019 21:58:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.019 21:58:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.019 21:58:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.019 21:58:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.019 21:58:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.019 21:58:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.019 21:58:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:18.019 21:58:00 -- 
json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:18.019 INFO: launching applications... 00:06:18.019 21:58:00 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.019 21:58:00 -- json_config/common.sh@9 -- # local app=target 00:06:18.019 21:58:00 -- json_config/common.sh@10 -- # shift 00:06:18.019 21:58:00 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.019 21:58:00 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.019 21:58:00 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.019 21:58:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.019 21:58:00 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.019 21:58:00 -- json_config/common.sh@22 -- # app_pid["$app"]=3837762 00:06:18.019 21:58:00 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.019 21:58:00 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.019 Waiting for target to run... 
00:06:18.019 21:58:00 -- json_config/common.sh@25 -- # waitforlisten 3837762 /var/tmp/spdk_tgt.sock 00:06:18.019 21:58:00 -- common/autotest_common.sh@817 -- # '[' -z 3837762 ']' 00:06:18.019 21:58:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.019 21:58:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.019 21:58:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.019 21:58:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.019 21:58:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.019 [2024-04-24 21:58:00.188932] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:06:18.019 [2024-04-24 21:58:00.189030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837762 ] 00:06:18.019 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.583 [2024-04-24 21:58:00.745717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.848 [2024-04-24 21:58:00.851708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.149 21:58:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.149 21:58:01 -- common/autotest_common.sh@850 -- # return 0 00:06:19.149 21:58:01 -- json_config/common.sh@26 -- # echo '' 00:06:19.149 00:06:19.149 21:58:01 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:19.149 INFO: shutting down applications... 
00:06:19.149 21:58:01 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:19.149 21:58:01 -- json_config/common.sh@31 -- # local app=target
00:06:19.149 21:58:01 -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:19.149 21:58:01 -- json_config/common.sh@35 -- # [[ -n 3837762 ]]
00:06:19.149 21:58:01 -- json_config/common.sh@38 -- # kill -SIGINT 3837762
00:06:19.149 21:58:01 -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:19.149 21:58:01 -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:19.149 21:58:01 -- json_config/common.sh@41 -- # kill -0 3837762
00:06:19.149 21:58:01 -- json_config/common.sh@45 -- # sleep 0.5
00:06:19.714 21:58:01 -- json_config/common.sh@40 -- # (( i++ ))
00:06:19.714 21:58:01 -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:19.714 21:58:01 -- json_config/common.sh@41 -- # kill -0 3837762
00:06:19.714 21:58:01 -- json_config/common.sh@45 -- # sleep 0.5
00:06:19.971 21:58:02 -- json_config/common.sh@40 -- # (( i++ ))
00:06:19.971 21:58:02 -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:19.971 21:58:02 -- json_config/common.sh@41 -- # kill -0 3837762
00:06:19.971 21:58:02 -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:19.971 21:58:02 -- json_config/common.sh@43 -- # break
00:06:19.971 21:58:02 -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:19.972 21:58:02 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:19.972 SPDK target shutdown done
00:06:19.972 21:58:02 -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:19.972 Success
00:06:19.972
00:06:19.972 real 0m2.149s
00:06:19.972 user 0m1.549s
00:06:19.972 sys 0m0.657s
00:06:19.972 21:58:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:19.972 21:58:02 -- common/autotest_common.sh@10 -- # set +x
00:06:19.972 ************************************
00:06:19.972 END TEST json_config_extra_key
00:06:19.972 ************************************
00:06:20.229 21:58:02 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:20.229 21:58:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:20.229 21:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:20.229 21:58:02 -- common/autotest_common.sh@10 -- # set +x
00:06:20.229 ************************************
00:06:20.229 START TEST alias_rpc
00:06:20.229 ************************************
00:06:20.229 21:58:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:20.229 * Looking for test storage...
00:06:20.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:20.229 21:58:02 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:20.229 21:58:02 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3838086
00:06:20.229 21:58:02 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:20.229 21:58:02 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3838086
00:06:20.229 21:58:02 -- common/autotest_common.sh@817 -- # '[' -z 3838086 ']'
00:06:20.229 21:58:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:20.229 21:58:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:20.229 21:58:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:20.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:20.229 21:58:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:20.229 21:58:02 -- common/autotest_common.sh@10 -- # set +x
00:06:20.486 [2024-04-24 21:58:02.520158] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:20.486 [2024-04-24 21:58:02.520328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838086 ]
00:06:20.486 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.486 [2024-04-24 21:58:02.614048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.486 [2024-04-24 21:58:02.732529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.431 21:58:03 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:21.431 21:58:03 -- common/autotest_common.sh@850 -- # return 0
00:06:21.431 21:58:03 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:21.998 21:58:04 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3838086
00:06:21.998 21:58:04 -- common/autotest_common.sh@936 -- # '[' -z 3838086 ']'
00:06:21.998 21:58:04 -- common/autotest_common.sh@940 -- # kill -0 3838086
00:06:21.998 21:58:04 -- common/autotest_common.sh@941 -- # uname
00:06:21.998 21:58:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:21.998 21:58:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3838086
00:06:21.998 21:58:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:21.998 21:58:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:21.998 21:58:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3838086'
00:06:21.998 killing process with pid 3838086
00:06:21.998 21:58:04 -- common/autotest_common.sh@955 -- # kill 3838086
00:06:21.998 21:58:04 -- common/autotest_common.sh@960 -- # wait 3838086
00:06:22.564
00:06:22.564 real 0m2.260s
00:06:22.564 user 0m2.842s
00:06:22.564 sys 0m0.564s
00:06:22.564 21:58:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:22.564 21:58:04 -- common/autotest_common.sh@10 -- # set +x
00:06:22.564 ************************************
00:06:22.564 END TEST alias_rpc
00:06:22.564 ************************************
00:06:22.564 21:58:04 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]]
00:06:22.564 21:58:04 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:22.564 21:58:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:22.564 21:58:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:22.564 21:58:04 -- common/autotest_common.sh@10 -- # set +x
00:06:22.564 ************************************
00:06:22.564 START TEST spdkcli_tcp
00:06:22.564 ************************************
00:06:22.564 21:58:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:22.823 * Looking for test storage...
00:06:22.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:22.823 21:58:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:22.823 21:58:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:22.823 21:58:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:06:22.823 21:58:04 -- common/autotest_common.sh@10 -- # set +x
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3838414
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:22.823 21:58:04 -- spdkcli/tcp.sh@27 -- # waitforlisten 3838414
00:06:22.823 21:58:04 -- common/autotest_common.sh@817 -- # '[' -z 3838414 ']'
00:06:22.823 21:58:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.823 21:58:04 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:22.823 21:58:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:22.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.823 21:58:04 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:22.823 21:58:04 -- common/autotest_common.sh@10 -- # set +x
00:06:22.823 [2024-04-24 21:58:04.973898] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:22.823 [2024-04-24 21:58:04.974077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838414 ]
00:06:22.823 EAL: No free 2048 kB hugepages reported on node 1
00:06:22.823 [2024-04-24 21:58:05.056159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.081 [2024-04-24 21:58:05.179823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.081 [2024-04-24 21:58:05.179827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.340 21:58:05 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:23.340 21:58:05 -- common/autotest_common.sh@850 -- # return 0
00:06:23.340 21:58:05 -- spdkcli/tcp.sh@31 -- # socat_pid=3838424
00:06:23.340 21:58:05 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:23.340 21:58:05 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:23.906 [
00:06:23.906 "bdev_malloc_delete",
00:06:23.906 "bdev_malloc_create",
00:06:23.906 "bdev_null_resize",
00:06:23.906 "bdev_null_delete",
00:06:23.906 "bdev_null_create",
00:06:23.906 "bdev_nvme_cuse_unregister",
00:06:23.906 "bdev_nvme_cuse_register",
00:06:23.906 "bdev_opal_new_user",
00:06:23.906 "bdev_opal_set_lock_state",
00:06:23.906 "bdev_opal_delete",
00:06:23.906 "bdev_opal_get_info",
00:06:23.906 "bdev_opal_create",
00:06:23.906 "bdev_nvme_opal_revert",
00:06:23.906 "bdev_nvme_opal_init",
00:06:23.906 "bdev_nvme_send_cmd",
00:06:23.906 "bdev_nvme_get_path_iostat",
00:06:23.906 "bdev_nvme_get_mdns_discovery_info",
00:06:23.906 "bdev_nvme_stop_mdns_discovery",
00:06:23.906 "bdev_nvme_start_mdns_discovery",
00:06:23.906 "bdev_nvme_set_multipath_policy",
00:06:23.906 "bdev_nvme_set_preferred_path",
00:06:23.906 "bdev_nvme_get_io_paths",
00:06:23.906 "bdev_nvme_remove_error_injection",
00:06:23.906 "bdev_nvme_add_error_injection",
00:06:23.906 "bdev_nvme_get_discovery_info",
00:06:23.906 "bdev_nvme_stop_discovery",
00:06:23.906 "bdev_nvme_start_discovery",
00:06:23.906 "bdev_nvme_get_controller_health_info",
00:06:23.906 "bdev_nvme_disable_controller",
00:06:23.906 "bdev_nvme_enable_controller",
00:06:23.906 "bdev_nvme_reset_controller",
00:06:23.906 "bdev_nvme_get_transport_statistics",
00:06:23.906 "bdev_nvme_apply_firmware",
00:06:23.906 "bdev_nvme_detach_controller",
00:06:23.906 "bdev_nvme_get_controllers",
00:06:23.907 "bdev_nvme_attach_controller",
00:06:23.907 "bdev_nvme_set_hotplug",
00:06:23.907 "bdev_nvme_set_options",
00:06:23.907 "bdev_passthru_delete",
00:06:23.907 "bdev_passthru_create",
00:06:23.907 "bdev_lvol_grow_lvstore",
00:06:23.907 "bdev_lvol_get_lvols",
00:06:23.907 "bdev_lvol_get_lvstores",
00:06:23.907 "bdev_lvol_delete",
00:06:23.907 "bdev_lvol_set_read_only",
00:06:23.907 "bdev_lvol_resize",
00:06:23.907 "bdev_lvol_decouple_parent",
00:06:23.907 "bdev_lvol_inflate",
00:06:23.907 "bdev_lvol_rename",
00:06:23.907 "bdev_lvol_clone_bdev",
00:06:23.907 "bdev_lvol_clone",
00:06:23.907 "bdev_lvol_snapshot",
00:06:23.907 "bdev_lvol_create",
00:06:23.907 "bdev_lvol_delete_lvstore",
00:06:23.907 "bdev_lvol_rename_lvstore",
00:06:23.907 "bdev_lvol_create_lvstore",
00:06:23.907 "bdev_raid_set_options",
00:06:23.907 "bdev_raid_remove_base_bdev",
00:06:23.907 "bdev_raid_add_base_bdev",
00:06:23.907 "bdev_raid_delete",
00:06:23.907 "bdev_raid_create",
00:06:23.907 "bdev_raid_get_bdevs",
00:06:23.907 "bdev_error_inject_error",
00:06:23.907 "bdev_error_delete",
00:06:23.907 "bdev_error_create",
00:06:23.907 "bdev_split_delete",
00:06:23.907 "bdev_split_create",
00:06:23.907 "bdev_delay_delete",
00:06:23.907 "bdev_delay_create",
00:06:23.907 "bdev_delay_update_latency",
00:06:23.907 "bdev_zone_block_delete",
00:06:23.907 "bdev_zone_block_create",
00:06:23.907 "blobfs_create",
00:06:23.907 "blobfs_detect",
00:06:23.907 "blobfs_set_cache_size",
00:06:23.907 "bdev_aio_delete",
00:06:23.907 "bdev_aio_rescan",
00:06:23.907 "bdev_aio_create",
00:06:23.907 "bdev_ftl_set_property",
00:06:23.907 "bdev_ftl_get_properties",
00:06:23.907 "bdev_ftl_get_stats",
00:06:23.907 "bdev_ftl_unmap",
00:06:23.907 "bdev_ftl_unload",
00:06:23.907 "bdev_ftl_delete",
00:06:23.907 "bdev_ftl_load",
00:06:23.907 "bdev_ftl_create",
00:06:23.907 "bdev_virtio_attach_controller",
00:06:23.907 "bdev_virtio_scsi_get_devices",
00:06:23.907 "bdev_virtio_detach_controller",
00:06:23.907 "bdev_virtio_blk_set_hotplug",
00:06:23.907 "bdev_iscsi_delete",
00:06:23.907 "bdev_iscsi_create",
00:06:23.907 "bdev_iscsi_set_options",
00:06:23.907 "accel_error_inject_error",
00:06:23.907 "ioat_scan_accel_module",
00:06:23.907 "dsa_scan_accel_module",
00:06:23.907 "iaa_scan_accel_module",
00:06:23.907 "vfu_virtio_create_scsi_endpoint",
00:06:23.907 "vfu_virtio_scsi_remove_target",
00:06:23.907 "vfu_virtio_scsi_add_target",
00:06:23.907 "vfu_virtio_create_blk_endpoint",
00:06:23.907 "vfu_virtio_delete_endpoint",
00:06:23.907 "keyring_file_remove_key",
00:06:23.907 "keyring_file_add_key",
00:06:23.907 "iscsi_set_options",
00:06:23.907 "iscsi_get_auth_groups",
00:06:23.907 "iscsi_auth_group_remove_secret",
00:06:23.907 "iscsi_auth_group_add_secret",
00:06:23.907 "iscsi_delete_auth_group",
00:06:23.907 "iscsi_create_auth_group",
00:06:23.907 "iscsi_set_discovery_auth",
00:06:23.907 "iscsi_get_options",
00:06:23.907 "iscsi_target_node_request_logout",
00:06:23.907 "iscsi_target_node_set_redirect",
00:06:23.907 "iscsi_target_node_set_auth",
00:06:23.907 "iscsi_target_node_add_lun",
00:06:23.907 "iscsi_get_stats",
00:06:23.907 "iscsi_get_connections",
00:06:23.907 "iscsi_portal_group_set_auth",
00:06:23.907 "iscsi_start_portal_group",
00:06:23.907 "iscsi_delete_portal_group",
00:06:23.907 "iscsi_create_portal_group",
00:06:23.907 "iscsi_get_portal_groups",
00:06:23.907 "iscsi_delete_target_node",
00:06:23.907 "iscsi_target_node_remove_pg_ig_maps",
00:06:23.907 "iscsi_target_node_add_pg_ig_maps",
00:06:23.907 "iscsi_create_target_node",
00:06:23.907 "iscsi_get_target_nodes",
00:06:23.907 "iscsi_delete_initiator_group",
00:06:23.907 "iscsi_initiator_group_remove_initiators",
00:06:23.907 "iscsi_initiator_group_add_initiators",
00:06:23.907 "iscsi_create_initiator_group",
00:06:23.907 "iscsi_get_initiator_groups",
00:06:23.907 "nvmf_set_crdt",
00:06:23.907 "nvmf_set_config",
00:06:23.907 "nvmf_set_max_subsystems",
00:06:23.907 "nvmf_subsystem_get_listeners",
00:06:23.907 "nvmf_subsystem_get_qpairs",
00:06:23.907 "nvmf_subsystem_get_controllers",
00:06:23.907 "nvmf_get_stats",
00:06:23.907 "nvmf_get_transports",
00:06:23.907 "nvmf_create_transport",
00:06:23.907 "nvmf_get_targets",
00:06:23.907 "nvmf_delete_target",
00:06:23.907 "nvmf_create_target",
00:06:23.907 "nvmf_subsystem_allow_any_host",
00:06:23.907 "nvmf_subsystem_remove_host",
00:06:23.907 "nvmf_subsystem_add_host",
00:06:23.907 "nvmf_ns_remove_host",
00:06:23.907 "nvmf_ns_add_host",
00:06:23.907 "nvmf_subsystem_remove_ns",
00:06:23.907 "nvmf_subsystem_add_ns",
00:06:23.907 "nvmf_subsystem_listener_set_ana_state",
00:06:23.907 "nvmf_discovery_get_referrals",
00:06:23.907 "nvmf_discovery_remove_referral",
00:06:23.907 "nvmf_discovery_add_referral",
00:06:23.907 "nvmf_subsystem_remove_listener",
00:06:23.907 "nvmf_subsystem_add_listener",
00:06:23.907 "nvmf_delete_subsystem",
00:06:23.907 "nvmf_create_subsystem",
00:06:23.907 "nvmf_get_subsystems",
00:06:23.907 "env_dpdk_get_mem_stats",
00:06:23.907 "nbd_get_disks",
00:06:23.907 "nbd_stop_disk",
00:06:23.907 "nbd_start_disk",
00:06:23.907 "ublk_recover_disk",
00:06:23.907 "ublk_get_disks",
00:06:23.907 "ublk_stop_disk",
00:06:23.907 "ublk_start_disk",
00:06:23.907 "ublk_destroy_target",
00:06:23.907 "ublk_create_target",
00:06:23.907 "virtio_blk_create_transport",
00:06:23.907 "virtio_blk_get_transports",
00:06:23.907 "vhost_controller_set_coalescing",
00:06:23.907 "vhost_get_controllers",
00:06:23.907 "vhost_delete_controller",
00:06:23.907 "vhost_create_blk_controller",
00:06:23.907 "vhost_scsi_controller_remove_target",
00:06:23.907 "vhost_scsi_controller_add_target",
00:06:23.907 "vhost_start_scsi_controller",
00:06:23.907 "vhost_create_scsi_controller",
00:06:23.907 "thread_set_cpumask",
00:06:23.907 "framework_get_scheduler",
00:06:23.907 "framework_set_scheduler",
00:06:23.907 "framework_get_reactors",
00:06:23.907 "thread_get_io_channels",
00:06:23.907 "thread_get_pollers",
00:06:23.907 "thread_get_stats",
00:06:23.907 "framework_monitor_context_switch",
00:06:23.907 "spdk_kill_instance",
00:06:23.907 "log_enable_timestamps",
00:06:23.907 "log_get_flags",
00:06:23.907 "log_clear_flag",
00:06:23.907 "log_set_flag",
00:06:23.907 "log_get_level",
00:06:23.907 "log_set_level",
00:06:23.907 "log_get_print_level",
00:06:23.907 "log_set_print_level",
00:06:23.907 "framework_enable_cpumask_locks",
00:06:23.907 "framework_disable_cpumask_locks",
00:06:23.907 "framework_wait_init",
00:06:23.907 "framework_start_init",
00:06:23.907 "scsi_get_devices",
00:06:23.907 "bdev_get_histogram",
00:06:23.907 "bdev_enable_histogram",
00:06:23.907 "bdev_set_qos_limit",
00:06:23.907 "bdev_set_qd_sampling_period",
00:06:23.907 "bdev_get_bdevs",
00:06:23.907 "bdev_reset_iostat",
00:06:23.907 "bdev_get_iostat",
00:06:23.907 "bdev_examine",
00:06:23.907 "bdev_wait_for_examine",
00:06:23.907 "bdev_set_options",
00:06:23.907 "notify_get_notifications",
00:06:23.907 "notify_get_types",
00:06:23.907 "accel_get_stats",
00:06:23.907 "accel_set_options",
00:06:23.907 "accel_set_driver",
00:06:23.907 "accel_crypto_key_destroy",
00:06:23.907 "accel_crypto_keys_get",
00:06:23.907 "accel_crypto_key_create",
00:06:23.907 "accel_assign_opc",
00:06:23.907 "accel_get_module_info",
00:06:23.907 "accel_get_opc_assignments",
00:06:23.907 "vmd_rescan",
00:06:23.907 "vmd_remove_device",
00:06:23.907 "vmd_enable",
00:06:23.907 "sock_set_default_impl",
00:06:23.907 "sock_impl_set_options",
00:06:23.907 "sock_impl_get_options",
00:06:23.907 "iobuf_get_stats",
00:06:23.907 "iobuf_set_options",
00:06:23.907 "keyring_get_keys",
00:06:23.907 "framework_get_pci_devices",
00:06:23.907 "framework_get_config",
00:06:23.907 "framework_get_subsystems",
00:06:23.907 "vfu_tgt_set_base_path",
00:06:23.907 "trace_get_info",
00:06:23.907 "trace_get_tpoint_group_mask",
00:06:23.907 "trace_disable_tpoint_group",
00:06:23.907 "trace_enable_tpoint_group",
00:06:23.907 "trace_clear_tpoint_mask",
00:06:23.907 "trace_set_tpoint_mask",
00:06:23.907 "spdk_get_version",
00:06:23.907 "rpc_get_methods"
00:06:23.907 ]
00:06:23.907 21:58:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:23.907 21:58:05 -- common/autotest_common.sh@716 -- # xtrace_disable
00:06:23.907 21:58:05 -- common/autotest_common.sh@10 -- # set +x
00:06:23.908 21:58:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:23.908 21:58:05 -- spdkcli/tcp.sh@38 -- # killprocess 3838414
00:06:23.908 21:58:05 -- common/autotest_common.sh@936 -- # '[' -z 3838414 ']'
00:06:23.908 21:58:05 -- common/autotest_common.sh@940 -- # kill -0 3838414
00:06:23.908 21:58:05 -- common/autotest_common.sh@941 -- # uname
00:06:23.908 21:58:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:23.908 21:58:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3838414
00:06:23.908 21:58:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:23.908 21:58:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:23.908 21:58:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3838414'
00:06:23.908 killing process with pid 3838414
00:06:23.908 21:58:05 -- common/autotest_common.sh@955 -- # kill 3838414
00:06:23.908 21:58:05 -- common/autotest_common.sh@960 -- # wait 3838414
00:06:24.166
00:06:24.166 real 0m1.618s
00:06:24.166 user 0m3.126s
00:06:24.166 sys 0m0.559s
00:06:24.166 21:58:06 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:24.166 21:58:06 -- common/autotest_common.sh@10 -- # set +x
00:06:24.166 ************************************
00:06:24.166 END TEST spdkcli_tcp
00:06:24.166 ************************************
00:06:24.425 21:58:06 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:24.425 21:58:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:24.425 21:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:24.425 21:58:06 -- common/autotest_common.sh@10 -- # set +x
00:06:24.425 ************************************
00:06:24.425 START TEST dpdk_mem_utility
00:06:24.425 ************************************
00:06:24.425 21:58:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:24.425 * Looking for test storage...
00:06:24.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:06:24.425 21:58:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:24.425 21:58:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3838630
00:06:24.425 21:58:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:24.425 21:58:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3838630
00:06:24.425 21:58:06 -- common/autotest_common.sh@817 -- # '[' -z 3838630 ']'
00:06:24.425 21:58:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.425 21:58:06 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:24.425 21:58:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:24.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.425 21:58:06 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:24.425 21:58:06 -- common/autotest_common.sh@10 -- # set +x
00:06:24.425 [2024-04-24 21:58:06.680116] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:24.425 [2024-04-24 21:58:06.680226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838630 ]
00:06:24.683 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.683 [2024-04-24 21:58:06.750538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.683 [2024-04-24 21:58:06.870579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.942 21:58:07 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:24.942 21:58:07 -- common/autotest_common.sh@850 -- # return 0
00:06:24.942 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:24.942 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:24.942 21:58:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:24.942 21:58:07 -- common/autotest_common.sh@10 -- # set +x
00:06:24.942 {
00:06:24.942 "filename": "/tmp/spdk_mem_dump.txt"
00:06:24.942 }
00:06:24.942 21:58:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:24.942 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:24.942 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:24.942 1 heaps totaling size 814.000000 MiB
00:06:24.942 size: 814.000000 MiB heap id: 0
00:06:24.942 end heaps----------
00:06:24.942 8 mempools totaling size 598.116089 MiB
00:06:24.942 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:24.942 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:24.942 size: 84.521057 MiB name: bdev_io_3838630
00:06:24.942 size: 51.011292 MiB name: evtpool_3838630
00:06:24.942 size: 50.003479 MiB name: msgpool_3838630
00:06:24.942 size: 21.763794 MiB name: PDU_Pool
00:06:24.942 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:24.942 size: 0.026123 MiB name: Session_Pool 00:06:24.942 end mempools------- 00:06:24.942 6 memzones totaling size 4.142822 MiB 00:06:24.942 size: 1.000366 MiB name: RG_ring_0_3838630 00:06:24.942 size: 1.000366 MiB name: RG_ring_1_3838630 00:06:24.942 size: 1.000366 MiB name: RG_ring_4_3838630 00:06:24.942 size: 1.000366 MiB name: RG_ring_5_3838630 00:06:24.942 size: 0.125366 MiB name: RG_ring_2_3838630 00:06:24.942 size: 0.015991 MiB name: RG_ring_3_3838630 00:06:24.942 end memzones------- 00:06:24.942 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.201 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:25.201 list of free elements. size: 12.519348 MiB 00:06:25.201 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:25.201 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:25.201 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:25.201 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:25.201 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:25.201 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:25.201 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:25.201 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:25.201 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:25.201 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:25.201 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:25.201 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:25.201 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:25.201 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:25.201 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:25.201 list of standard malloc elements. 
size: 199.218079 MiB 00:06:25.201 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:25.201 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:25.201 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:25.201 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:25.201 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:25.201 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.201 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:25.201 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.201 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:25.201 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.201 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:25.201 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:25.201 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:25.201 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:06:25.201 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:25.201 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:25.202 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:25.202 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:25.202 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:25.202 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:25.202 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:25.202 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:25.202 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:25.202 list of memzone associated elements. 
size: 602.262573 MiB
00:06:25.202 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:25.202 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:25.202 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:25.202 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:25.202 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:25.202 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3838630_0
00:06:25.202 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:25.202 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3838630_0
00:06:25.202 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:25.202 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3838630_0
00:06:25.202 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:25.202 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:25.202 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:25.202 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:25.202 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:25.202 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3838630
00:06:25.202 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:25.202 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3838630
00:06:25.202 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:25.202 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3838630
00:06:25.202 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:25.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:25.202 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:25.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:25.202 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:25.202 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:25.202 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:25.202 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:25.202 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:25.202 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3838630
00:06:25.202 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:25.202 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3838630
00:06:25.202 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:25.202 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3838630
00:06:25.202 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:25.202 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3838630
00:06:25.202 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:25.202 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3838630
00:06:25.202 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:25.202 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:25.202 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:25.202 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:25.202 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:25.202 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:25.202 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:25.202 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3838630
00:06:25.202 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:25.202 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:25.202 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:25.202 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:25.202 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:25.202 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3838630
00:06:25.202 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:25.202 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:25.202 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:25.202 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3838630
00:06:25.202 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:25.202 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3838630
00:06:25.202 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:25.202 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:25.202 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:25.202 21:58:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3838630
00:06:25.202 21:58:07 -- common/autotest_common.sh@936 -- # '[' -z 3838630 ']'
00:06:25.202 21:58:07 -- common/autotest_common.sh@940 -- # kill -0 3838630
00:06:25.202 21:58:07 -- common/autotest_common.sh@941 -- # uname
00:06:25.202 21:58:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:25.202 21:58:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3838630
00:06:25.202 21:58:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:25.202 21:58:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:25.202 21:58:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3838630'
killing process with pid 3838630
00:06:25.202 21:58:07 -- common/autotest_common.sh@955 -- # kill 3838630
00:06:25.202 21:58:07 -- common/autotest_common.sh@960 -- # wait 3838630
00:06:25.769
00:06:25.769 real 0m1.217s
00:06:25.769 user 0m1.207s
00:06:25.769 sys 0m0.435s
00:06:25.769 21:58:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:25.769 21:58:07 -- common/autotest_common.sh@10 -- # set +x
00:06:25.769
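The spdk_memzone_dump listing above pairs each "element at address ... with size" record with an "associated memzone info" record. As a side illustration only (not part of the test script), the element sizes can be totalled with a small awk filter; the two sample lines below are copied from the dump:

```shell
#!/usr/bin/env bash
# Sum the "with size: X MiB" fields from a memzone-dump-style listing.
# The two input lines are taken verbatim from the dump above.
printf '%s\n' \
  'element at address: 0x20001aa95500 with size: 211.416748 MiB' \
  'element at address: 0x200027e6ffc0 with size: 157.562561 MiB' |
  awk '/with size:/ { total += $(NF-1) } END { printf "total: %.6f MiB\n", total }'
```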
************************************
00:06:25.769 END TEST dpdk_mem_utility
00:06:25.769 ************************************
00:06:25.769 21:58:07 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:25.769 21:58:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:25.769 21:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:25.769 21:58:07 -- common/autotest_common.sh@10 -- # set +x
00:06:25.769 ************************************
00:06:25.769 START TEST event
00:06:25.769 ************************************
00:06:25.769 21:58:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:25.769 * Looking for test storage...
00:06:25.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:25.769 21:58:07 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:25.769 21:58:07 -- bdev/nbd_common.sh@6 -- # set -e
00:06:25.769 21:58:07 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:25.769 21:58:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:25.769 21:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:25.769 21:58:07 -- common/autotest_common.sh@10 -- # set +x
00:06:26.027 ************************************
00:06:26.027 START TEST event_perf
00:06:26.027 ************************************
00:06:26.027 21:58:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:26.027 Running I/O for 1 seconds...[2024-04-24 21:58:08.078246] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:26.027 [2024-04-24 21:58:08.078310] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838958 ]
00:06:26.027 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.027 [2024-04-24 21:58:08.145669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:26.027 [2024-04-24 21:58:08.268281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:26.027 [2024-04-24 21:58:08.268352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:26.027 [2024-04-24 21:58:08.268414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:26.027 [2024-04-24 21:58:08.268419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.400 Running I/O for 1 seconds...
00:06:27.400 lcore 0: 204864
00:06:27.400 lcore 1: 204864
00:06:27.400 lcore 2: 204865
00:06:27.400 lcore 3: 204863
00:06:27.400 done.
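event_perf above drives one reactor per core in the 0xF mask and prints a per-lcore event count after the 1-second run. As a side illustration, the four counters can be totalled with awk; the numbers are copied from the run above:

```shell
#!/usr/bin/env bash
# Sum the per-lcore event counts printed by event_perf above.
printf '%s\n' 'lcore 0: 204864' 'lcore 1: 204864' 'lcore 2: 204865' 'lcore 3: 204863' |
  awk '{ total += $NF } END { printf "total events: %d\n", total }'
# prints: total events: 819456
```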
00:06:27.400
00:06:27.400 real 0m1.331s
00:06:27.400 user 0m4.232s
00:06:27.400 sys 0m0.094s
00:06:27.400 21:58:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:27.400 21:58:09 -- common/autotest_common.sh@10 -- # set +x
00:06:27.400 ************************************
00:06:27.400 END TEST event_perf
00:06:27.400 ************************************
00:06:27.400 21:58:09 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:27.400 21:58:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:27.400 21:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.400 21:58:09 -- common/autotest_common.sh@10 -- # set +x
00:06:27.400 ************************************
00:06:27.400 START TEST event_reactor
00:06:27.400 ************************************
00:06:27.400 21:58:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:27.400 [2024-04-24 21:58:09.554534] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:27.400 [2024-04-24 21:58:09.554599] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839125 ]
00:06:27.400 EAL: No free 2048 kB hugepages reported on node 1
00:06:27.400 [2024-04-24 21:58:09.627416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.658 [2024-04-24 21:58:09.748473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.033 test_start
00:06:29.033 oneshot
00:06:29.033 tick 100
00:06:29.033 tick 100
00:06:29.033 tick 250
00:06:29.033 tick 100
00:06:29.033 tick 100
00:06:29.033 tick 100
00:06:29.033 tick 250
00:06:29.033 tick 500
00:06:29.033 tick 100
00:06:29.033 tick 100
00:06:29.033 tick 250
00:06:29.033 tick 100
00:06:29.033 tick 100
00:06:29.033 test_end
00:06:29.033
00:06:29.033 real 0m1.338s
00:06:29.033 user 0m1.240s
00:06:29.033 sys 0m0.093s
00:06:29.033 21:58:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:29.033 21:58:10 -- common/autotest_common.sh@10 -- # set +x
00:06:29.033 ************************************
00:06:29.033 END TEST event_reactor
00:06:29.033 ************************************
00:06:29.033 21:58:10 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:29.033 21:58:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:29.033 21:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:29.033 21:58:10 -- common/autotest_common.sh@10 -- # set +x
00:06:29.033 ************************************
00:06:29.033 START TEST event_reactor_perf
00:06:29.033 ************************************
00:06:29.033 21:58:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:29.033 [2024-04-24 21:58:11.060779] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:29.033 [2024-04-24 21:58:11.060920] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839285 ]
00:06:29.033 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.033 [2024-04-24 21:58:11.163449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.033 [2024-04-24 21:58:11.282907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.406 test_start
00:06:30.406 test_end
00:06:30.406 Performance: 352646 events per second
00:06:30.406
00:06:30.406 real 0m1.376s
00:06:30.406 user 0m1.256s
00:06:30.406 sys 0m0.114s
00:06:30.406 21:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:30.406 21:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:30.406 ************************************
00:06:30.406 END TEST event_reactor_perf
00:06:30.406 ************************************
00:06:30.406 21:58:12 -- event/event.sh@49 -- # uname -s
00:06:30.406 21:58:12 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:30.406 21:58:12 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:30.406 21:58:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:30.406 21:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:30.406 21:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:30.406 ************************************
00:06:30.406 START TEST event_scheduler
00:06:30.406 ************************************
00:06:30.406 21:58:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:30.406 * Looking for test storage...
00:06:30.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:30.406 21:58:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:30.406 21:58:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3839591
00:06:30.406 21:58:12 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:30.406 21:58:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:30.406 21:58:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 3839591
00:06:30.406 21:58:12 -- common/autotest_common.sh@817 -- # '[' -z 3839591 ']'
00:06:30.406 21:58:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.406 21:58:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:30.406 21:58:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:58:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:30.406 21:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:30.665 [2024-04-24 21:58:12.688592] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
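waitforlisten above retries (with `local max_retries=100`) until the scheduler app is listening on /var/tmp/spdk.sock. Below is a rough, dependency-free sketch of that bounded-poll shape; a temp file created by a background job stands in for the UNIX socket (an assumption for illustration, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Bounded readiness poll in the style of waitforlisten above.
# A background job creating a temp file stands in for the app
# opening its UNIX domain socket (hypothetical stand-in).
target=$(mktemp -u)
( sleep 0.3; : > "$target" ) &   # "app" becomes ready after a delay

max_retries=100
while [ ! -e "$target" ]; do
  max_retries=$((max_retries - 1))
  if [ "$max_retries" -le 0 ]; then
    echo "timed out"
    exit 1
  fi
  sleep 0.1
done
wait
echo "ready"
rm -f "$target"
```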
00:06:30.665 [2024-04-24 21:58:12.688691] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839591 ]
00:06:30.665 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.665 [2024-04-24 21:58:12.766208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:30.665 [2024-04-24 21:58:12.893619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.665 [2024-04-24 21:58:12.893674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.665 [2024-04-24 21:58:12.893736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:30.665 [2024-04-24 21:58:12.893740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:30.923 21:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:30.923 21:58:13 -- common/autotest_common.sh@850 -- # return 0
00:06:30.923 21:58:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:30.923 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:30.923 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:30.924 POWER: Env isn't set yet!
00:06:30.924 POWER: Attempting to initialise ACPI cpufreq power management...
00:06:30.924 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies
00:06:30.924 POWER: Cannot get available frequencies of lcore 0
00:06:30.924 POWER: Attempting to initialise PSTAT power management...
00:06:30.924 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:06:30.924 POWER: Initialized successfully for lcore 0 power management
00:06:30.924 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:06:30.924 POWER: Initialized successfully for lcore 1 power management
00:06:30.924 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:06:30.924 POWER: Initialized successfully for lcore 2 power management
00:06:30.924 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:06:30.924 POWER: Initialized successfully for lcore 3 power management
00:06:30.924 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:30.924 21:58:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:30.924 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:30.924 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 [2024-04-24 21:58:13.207807] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
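The POWER messages above come from DPDK's power library switching each lcore's cpufreq governor to 'performance' through sysfs. A sketch for inspecting the same sysfs file follows; on hosts without cpufreq support (VMs, containers) the file is absent, which is what the earlier "Cannot get available frequencies of lcore 0" warning reflects:

```shell
#!/usr/bin/env bash
# Inspect the sysfs governor file that DPDK's power library manipulates.
# This only reads; the log above shows DPDK writing 'performance' here.
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov" ]; then
  echo "lcore 0 governor: $(cat "$gov")"
else
  echo "no cpufreq sysfs for cpu0"
fi
```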
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:31.182 21:58:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:31.182 21:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 ************************************
00:06:31.182 START TEST scheduler_create_thread
00:06:31.182 ************************************
00:06:31.182 21:58:13 -- common/autotest_common.sh@1111 -- # scheduler_create_thread
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 2
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 3
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 4
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 5
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 6
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 7
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 8
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 9
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 10
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
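Each scheduler_thread_create call above pins its thread with a one-hot cpumask (-m 0x1 through -m 0x8 for cores 0 to 3); the mask for core N is simply 1 shifted left by N. A quick illustration:

```shell
#!/usr/bin/env bash
# The one-hot cpumasks used by the test above (-m 0x1 .. -m 0x8)
# are just 1 << core for cores 0-3.
for core in 0 1 2 3; do
  printf 'core %d -> cpumask 0x%x\n' "$core" $((1 << core))
done
# prints:
# core 0 -> cpumask 0x1
# core 1 -> cpumask 0x2
# core 2 -> cpumask 0x4
# core 3 -> cpumask 0x8
```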
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.182 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:31.182 21:58:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:31.182 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.182 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:31.747 21:58:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:31.747 21:58:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:31.747 21:58:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:31.747 21:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:33.120 21:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:33.120 21:58:15 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:33.120 21:58:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:33.120 21:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:33.120 21:58:15 -- common/autotest_common.sh@10 -- # set +x
00:06:34.492 21:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:34.492
00:06:34.492 real 0m3.100s
00:06:34.492 user 0m0.010s
00:06:34.492 sys 0m0.004s
00:06:34.492 21:58:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:34.492 21:58:16 -- common/autotest_common.sh@10 -- # set +x
00:06:34.492 ************************************
00:06:34.492 END TEST scheduler_create_thread
00:06:34.492 ************************************
00:06:34.492 21:58:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:34.492 21:58:16 -- scheduler/scheduler.sh@46 -- # killprocess 3839591
00:06:34.492 21:58:16 -- common/autotest_common.sh@936 -- # '[' -z 3839591 ']'
00:06:34.492 21:58:16 -- common/autotest_common.sh@940 -- # kill -0 3839591
00:06:34.492 21:58:16 -- common/autotest_common.sh@941 -- # uname
00:06:34.492 21:58:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:34.492 21:58:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3839591
00:06:34.492 21:58:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:06:34.492 21:58:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:06:34.492 21:58:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3839591'
killing process with pid 3839591
00:06:34.492 21:58:16 -- common/autotest_common.sh@955 -- # kill 3839591
00:06:34.492 21:58:16 -- common/autotest_common.sh@960 -- # wait 3839591
00:06:34.750 [2024-04-24 21:58:16.804218] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
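The killprocess sequence above probes liveness with `kill -0`, reads the command name with `ps --no-headers -o comm=`, then kills the pid and reaps it with `wait`. A minimal standalone sketch of the same pattern, with a `sleep` process standing in for the SPDK app (an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from autotest_common.sh above.
# A background sleep stands in for the application under test.
sleep 30 &
pid=$!

kill -0 "$pid"                           # liveness check: exits 0 if alive
name=$(ps --no-headers -o comm= "$pid")  # command name, e.g. "sleep"
echo "killing process with pid $pid ($name)"

kill "$pid"
wait "$pid" 2>/dev/null || true          # reap; wait reports the signal status
echo "done"
```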
00:06:34.750 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully
00:06:34.750 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:34.750 POWER: Power management governor of lcore 1 has been set to 'userspace' successfully
00:06:34.750 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:34.750 POWER: Power management governor of lcore 2 has been set to 'userspace' successfully
00:06:34.750 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:34.750 POWER: Power management governor of lcore 3 has been set to 'userspace' successfully
00:06:34.750 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:35.008
00:06:35.008 real 0m4.545s
00:06:35.008 user 0m7.870s
00:06:35.008 sys 0m0.435s
00:06:35.008 21:58:17 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:35.008 21:58:17 -- common/autotest_common.sh@10 -- # set +x
00:06:35.008 ************************************
00:06:35.008 END TEST event_scheduler
00:06:35.008 ************************************
00:06:35.008 21:58:17 -- event/event.sh@51 -- # modprobe -n nbd
00:06:35.008 21:58:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:35.008 21:58:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:35.008 21:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:35.008 21:58:17 -- common/autotest_common.sh@10 -- # set +x
00:06:35.266 ************************************
00:06:35.266 START TEST app_repeat
00:06:35.266 ************************************
00:06:35.266 21:58:17 -- common/autotest_common.sh@1111 -- # app_repeat_test
00:06:35.266 21:58:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.266 21:58:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
21:58:17 -- event/event.sh@13 -- # local nbd_list
00:06:35.266 21:58:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:35.266 21:58:17 -- event/event.sh@14 -- # local bdev_list
00:06:35.266 21:58:17 -- event/event.sh@15 -- # local repeat_times=4
00:06:35.266 21:58:17 -- event/event.sh@17 -- # modprobe nbd
00:06:35.266 21:58:17 -- event/event.sh@19 -- # repeat_pid=3840192
00:06:35.266 21:58:17 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:35.266 21:58:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:35.266 21:58:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3840192'
Process app_repeat pid: 3840192
00:06:35.266 21:58:17 -- event/event.sh@23 -- # for i in {0..2}
00:06:35.266 21:58:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:06:35.266 21:58:17 -- event/event.sh@25 -- # waitforlisten 3840192 /var/tmp/spdk-nbd.sock
00:06:35.266 21:58:17 -- common/autotest_common.sh@817 -- # '[' -z 3840192 ']'
00:06:35.266 21:58:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:35.266 21:58:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:35.266 21:58:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
21:58:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:35.266 21:58:17 -- common/autotest_common.sh@10 -- # set +x
00:06:35.266 [2024-04-24 21:58:17.296731] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:06:35.266 [2024-04-24 21:58:17.296810] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840192 ]
00:06:35.266 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.266 [2024-04-24 21:58:17.370831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:35.266 [2024-04-24 21:58:17.494293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:35.266 [2024-04-24 21:58:17.494299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.525 21:58:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:35.525 21:58:17 -- common/autotest_common.sh@850 -- # return 0
00:06:35.525 21:58:17 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:35.783 Malloc0
00:06:36.041 21:58:18 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:36.299 Malloc1
00:06:36.299 21:58:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@12 -- # local i
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:36.299 21:58:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:36.557 /dev/nbd0
00:06:36.557 21:58:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:36.557 21:58:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:36.557 21:58:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:06:36.557 21:58:18 -- common/autotest_common.sh@855 -- # local i
00:06:36.557 21:58:18 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:36.557 21:58:18 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:36.557 21:58:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:06:36.557 21:58:18 -- common/autotest_common.sh@859 -- # break
00:06:36.557 21:58:18 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:36.557 21:58:18 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:36.557 21:58:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:36.557 1+0 records in
00:06:36.557 1+0 records out
00:06:36.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165176 s, 24.8 MB/s
00:06:36.557 21:58:18 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:36.557 21:58:18 -- common/autotest_common.sh@872 -- # size=4096
00:06:36.557 21:58:18 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:36.557 21:58:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:36.557 21:58:18 -- common/autotest_common.sh@875 -- # return 0
00:06:36.557 21:58:18 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:36.557 21:58:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:36.557 21:58:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:37.122 /dev/nbd1
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:37.122 21:58:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:06:37.122 21:58:19 -- common/autotest_common.sh@855 -- # local i
00:06:37.122 21:58:19 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:37.122 21:58:19 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:37.122 21:58:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:06:37.122 21:58:19 -- common/autotest_common.sh@859 -- # break
00:06:37.122 21:58:19 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:37.122 21:58:19 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:37.122 21:58:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:37.122 1+0 records in
00:06:37.122 1+0 records out
00:06:37.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287508 s, 14.2 MB/s
00:06:37.122 21:58:19 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.122 21:58:19 -- common/autotest_common.sh@872 -- # size=4096
00:06:37.122 21:58:19 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:37.122 21:58:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:37.122 21:58:19 -- common/autotest_common.sh@875 -- # return 0
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:37.122 21:58:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:37.380 {
00:06:37.380 "nbd_device": "/dev/nbd0",
00:06:37.380 "bdev_name": "Malloc0"
00:06:37.380 },
00:06:37.380 {
00:06:37.380 "nbd_device": "/dev/nbd1",
00:06:37.380 "bdev_name": "Malloc1"
00:06:37.380 }
00:06:37.380 ]'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:37.380 {
00:06:37.380 "nbd_device": "/dev/nbd0",
00:06:37.380 "bdev_name": "Malloc0"
00:06:37.380 },
00:06:37.380 {
00:06:37.380 "nbd_device": "/dev/nbd1",
00:06:37.380 "bdev_name": "Malloc1"
00:06:37.380 }
00:06:37.380 ]'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:37.380 /dev/nbd1'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:37.380 /dev/nbd1'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@65 -- # count=2
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@95 -- # count=2
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:37.380 21:58:19 -- bdev/nbd_common.sh@70 -- # local nbd_list
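nbd_get_count above pipes the nbd_get_disks JSON through `jq -r '.[] | .nbd_device'` and then counts device names with `grep -c /dev/nbd`. A sketch of just the counting stage, using the two names from the log (jq is omitted here to keep the sketch dependency-free):

```shell
#!/usr/bin/env bash
# Sketch of the device-count stage from nbd_get_count above:
# count lines matching /dev/nbd with grep -c. The two names are
# the ones jq extracted from the nbd_get_disks JSON in the log.
nbd_disks_name='/dev/nbd0
/dev/nbd1'
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "count=$count"
# prints: count=2
```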
21:58:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.381 21:58:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.381 21:58:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.381 21:58:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.381 256+0 records in 00:06:37.381 256+0 records out 00:06:37.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847526 s, 124 MB/s 00:06:37.381 21:58:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.381 21:58:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.639 256+0 records in 00:06:37.639 256+0 records out 00:06:37.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253105 s, 41.4 MB/s 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.639 256+0 records in 00:06:37.639 256+0 records out 00:06:37.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026793 s, 39.1 MB/s 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.639 21:58:19 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.639 21:58:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@41 -- # break 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.897 21:58:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@41 -- # break 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.156 21:58:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.460 21:58:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.460 21:58:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.461 21:58:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@65 -- # true 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.718 21:58:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.718 21:58:20 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.976 21:58:21 -- event/event.sh@35 -- # sleep 3 00:06:39.542 [2024-04-24 21:58:21.499597] app.c: 828:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:06:39.542 [2024-04-24 21:58:21.617213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.542 [2024-04-24 21:58:21.617213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.542 [2024-04-24 21:58:21.677715] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.542 [2024-04-24 21:58:21.677794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.081 21:58:24 -- event/event.sh@23 -- # for i in {0..2} 00:06:42.082 21:58:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.082 spdk_app_start Round 1 00:06:42.082 21:58:24 -- event/event.sh@25 -- # waitforlisten 3840192 /var/tmp/spdk-nbd.sock 00:06:42.082 21:58:24 -- common/autotest_common.sh@817 -- # '[' -z 3840192 ']' 00:06:42.082 21:58:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.082 21:58:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.082 21:58:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:42.082 21:58:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.082 21:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.648 21:58:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.648 21:58:24 -- common/autotest_common.sh@850 -- # return 0 00:06:42.648 21:58:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.905 Malloc0 00:06:43.164 21:58:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.422 Malloc1 00:06:43.422 21:58:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@12 -- # local i 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.422 21:58:25 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.681 /dev/nbd0 00:06:43.681 21:58:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.681 21:58:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.681 21:58:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:43.681 21:58:25 -- common/autotest_common.sh@855 -- # local i 00:06:43.681 21:58:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:43.681 21:58:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:43.681 21:58:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:43.681 21:58:25 -- common/autotest_common.sh@859 -- # break 00:06:43.681 21:58:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:43.681 21:58:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:43.681 21:58:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.681 1+0 records in 00:06:43.681 1+0 records out 00:06:43.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156106 s, 26.2 MB/s 00:06:43.681 21:58:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.681 21:58:25 -- common/autotest_common.sh@872 -- # size=4096 00:06:43.681 21:58:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.681 21:58:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:43.681 21:58:25 -- common/autotest_common.sh@875 -- # return 0 00:06:43.681 21:58:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.681 21:58:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.681 21:58:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:06:44.246 /dev/nbd1 00:06:44.246 21:58:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.246 21:58:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.246 21:58:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:44.246 21:58:26 -- common/autotest_common.sh@855 -- # local i 00:06:44.247 21:58:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:44.247 21:58:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:44.247 21:58:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:44.247 21:58:26 -- common/autotest_common.sh@859 -- # break 00:06:44.247 21:58:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:44.247 21:58:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:44.247 21:58:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.247 1+0 records in 00:06:44.247 1+0 records out 00:06:44.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206269 s, 19.9 MB/s 00:06:44.247 21:58:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.247 21:58:26 -- common/autotest_common.sh@872 -- # size=4096 00:06:44.247 21:58:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.247 21:58:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:44.247 21:58:26 -- common/autotest_common.sh@875 -- # return 0 00:06:44.247 21:58:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.247 21:58:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.247 21:58:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.247 21:58:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.247 21:58:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.504 21:58:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.504 { 00:06:44.504 "nbd_device": "/dev/nbd0", 00:06:44.504 "bdev_name": "Malloc0" 00:06:44.504 }, 00:06:44.504 { 00:06:44.504 "nbd_device": "/dev/nbd1", 00:06:44.504 "bdev_name": "Malloc1" 00:06:44.504 } 00:06:44.504 ]' 00:06:44.504 21:58:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.504 { 00:06:44.504 "nbd_device": "/dev/nbd0", 00:06:44.504 "bdev_name": "Malloc0" 00:06:44.504 }, 00:06:44.504 { 00:06:44.504 "nbd_device": "/dev/nbd1", 00:06:44.504 "bdev_name": "Malloc1" 00:06:44.504 } 00:06:44.504 ]' 00:06:44.504 21:58:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.762 /dev/nbd1' 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.762 /dev/nbd1' 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.762 256+0 records in 00:06:44.762 256+0 records out 00:06:44.762 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585712 s, 179 MB/s 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.762 256+0 records in 00:06:44.762 256+0 records out 00:06:44.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259023 s, 40.5 MB/s 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.762 21:58:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.762 256+0 records in 00:06:44.762 256+0 records out 00:06:44.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260281 s, 40.3 MB/s 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@51 -- # local i 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.763 21:58:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@41 -- # break 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.021 21:58:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.586 21:58:27 -- 
bdev/nbd_common.sh@41 -- # break 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.586 21:58:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@65 -- # true 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.152 21:58:28 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.152 21:58:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.409 21:58:28 -- event/event.sh@35 -- # sleep 3 00:06:46.667 [2024-04-24 21:58:28.865231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.925 [2024-04-24 21:58:28.983150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.925 [2024-04-24 21:58:28.983156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.925 [2024-04-24 21:58:29.046512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:46.925 [2024-04-24 21:58:29.046593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.449 21:58:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:49.449 21:58:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:49.449 spdk_app_start Round 2 00:06:49.449 21:58:31 -- event/event.sh@25 -- # waitforlisten 3840192 /var/tmp/spdk-nbd.sock 00:06:49.449 21:58:31 -- common/autotest_common.sh@817 -- # '[' -z 3840192 ']' 00:06:49.449 21:58:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.449 21:58:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.449 21:58:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.449 21:58:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.449 21:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:49.706 21:58:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.706 21:58:31 -- common/autotest_common.sh@850 -- # return 0 00:06:49.706 21:58:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.271 Malloc0 00:06:50.271 21:58:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.837 Malloc1 00:06:50.837 21:58:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.837 21:58:32 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.837 21:58:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.094 /dev/nbd0 00:06:51.094 21:58:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.352 21:58:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.352 21:58:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:51.352 21:58:33 -- common/autotest_common.sh@855 -- # local i 00:06:51.352 21:58:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:51.352 21:58:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:51.352 21:58:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:51.352 21:58:33 -- common/autotest_common.sh@859 -- # break 00:06:51.352 21:58:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:51.352 21:58:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:51.352 21:58:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.352 1+0 records in 00:06:51.352 
1+0 records out 00:06:51.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237596 s, 17.2 MB/s 00:06:51.352 21:58:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.352 21:58:33 -- common/autotest_common.sh@872 -- # size=4096 00:06:51.352 21:58:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.352 21:58:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:51.352 21:58:33 -- common/autotest_common.sh@875 -- # return 0 00:06:51.352 21:58:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.352 21:58:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.352 21:58:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.918 /dev/nbd1 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.918 21:58:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:51.918 21:58:33 -- common/autotest_common.sh@855 -- # local i 00:06:51.918 21:58:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:51.918 21:58:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:51.918 21:58:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:51.918 21:58:33 -- common/autotest_common.sh@859 -- # break 00:06:51.918 21:58:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:51.918 21:58:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:51.918 21:58:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.918 1+0 records in 00:06:51.918 1+0 records out 00:06:51.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021229 s, 19.3 MB/s 00:06:51.918 21:58:33 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.918 21:58:33 -- common/autotest_common.sh@872 -- # size=4096 00:06:51.918 21:58:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.918 21:58:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:51.918 21:58:33 -- common/autotest_common.sh@875 -- # return 0 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.918 21:58:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.175 21:58:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.175 { 00:06:52.175 "nbd_device": "/dev/nbd0", 00:06:52.175 "bdev_name": "Malloc0" 00:06:52.175 }, 00:06:52.175 { 00:06:52.175 "nbd_device": "/dev/nbd1", 00:06:52.175 "bdev_name": "Malloc1" 00:06:52.175 } 00:06:52.175 ]' 00:06:52.175 21:58:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.175 { 00:06:52.175 "nbd_device": "/dev/nbd0", 00:06:52.175 "bdev_name": "Malloc0" 00:06:52.175 }, 00:06:52.176 { 00:06:52.176 "nbd_device": "/dev/nbd1", 00:06:52.176 "bdev_name": "Malloc1" 00:06:52.176 } 00:06:52.176 ]' 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.176 /dev/nbd1' 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.176 /dev/nbd1' 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.176 21:58:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.176 
21:58:34 -- bdev/nbd_common.sh@95 -- # count=2
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:52.176 256+0 records in
00:06:52.176 256+0 records out
00:06:52.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512871 s, 204 MB/s
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:52.176 256+0 records in
00:06:52.176 256+0 records out
00:06:52.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244408 s, 42.9 MB/s
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:52.176 256+0 records in
00:06:52.176 256+0 records out
00:06:52.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258958 s, 40.5 MB/s
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.176 21:58:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@51 -- # local i
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.433 21:58:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@41 -- # break
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@45 -- # return 0
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.999 21:58:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@41 -- # break
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@45 -- # return 0
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.257 21:58:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:53.515 21:58:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:53.515 21:58:35 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:53.515 21:58:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@65 -- # true
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@65 -- # count=0
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@104 -- # count=0
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:53.773 21:58:35 -- bdev/nbd_common.sh@109 -- # return 0
00:06:53.773 21:58:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:54.338 21:58:36 -- event/event.sh@35 -- # sleep 3
00:06:54.596 [2024-04-24 21:58:36.680520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:54.596 [2024-04-24 21:58:36.797404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.596 [2024-04-24 21:58:36.797413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.854 [2024-04-24 21:58:36.860616] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:54.854 [2024-04-24 21:58:36.860681] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:57.383 21:58:39 -- event/event.sh@38 -- # waitforlisten 3840192 /var/tmp/spdk-nbd.sock
00:06:57.383 21:58:39 -- common/autotest_common.sh@817 -- # '[' -z 3840192 ']'
00:06:57.383 21:58:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:57.383 21:58:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:57.383 21:58:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
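The nbd_dd_data_verify pass traced above follows a simple pattern: fill a scratch file with random data, dd it onto each device, then cmp each device back against the file. A minimal stand-alone sketch of that pattern is below; plain temp files stand in for /dev/nbd0 and /dev/nbd1 (an assumption made so it runs without an NBD setup), which is also why the oflag=direct used against real block devices is dropped.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify pattern. Temp files are
# hypothetical stand-ins for the real /dev/nbd* block devices.
tmp_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")

# write pass: 1 MiB of random data, copied onto every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
done

# verify pass: byte-compare the first 1M of every "device" with the pattern
rc=0
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i" || rc=1
done

rm -f "${nbd_list[@]}"
echo "verify rc=$rc"
```

The verify pass intentionally reuses the same file list as the write pass, so any device that dropped or reordered writes fails the cmp.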
00:06:57.383 21:58:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:57.383 21:58:39 -- common/autotest_common.sh@10 -- # set +x
00:06:57.641 21:58:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:57.641 21:58:39 -- common/autotest_common.sh@850 -- # return 0
00:06:57.641 21:58:39 -- event/event.sh@39 -- # killprocess 3840192
00:06:57.641 21:58:39 -- common/autotest_common.sh@936 -- # '[' -z 3840192 ']'
00:06:57.641 21:58:39 -- common/autotest_common.sh@940 -- # kill -0 3840192
00:06:57.641 21:58:39 -- common/autotest_common.sh@941 -- # uname
00:06:57.641 21:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:57.641 21:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3840192
00:06:57.641 21:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:57.641 21:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:57.641 21:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840192'
killing process with pid 3840192
00:06:57.641 21:58:39 -- common/autotest_common.sh@955 -- # kill 3840192
00:06:57.641 21:58:39 -- common/autotest_common.sh@960 -- # wait 3840192
00:06:57.900 spdk_app_start is called in Round 0.
00:06:57.900 Shutdown signal received, stop current app iteration
00:06:57.900 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization...
00:06:57.900 spdk_app_start is called in Round 1.
00:06:57.900 Shutdown signal received, stop current app iteration
00:06:57.900 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization...
00:06:57.900 spdk_app_start is called in Round 2.
00:06:57.900 Shutdown signal received, stop current app iteration
00:06:57.900 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization...
00:06:57.900 spdk_app_start is called in Round 3.
00:06:57.900 Shutdown signal received, stop current app iteration
00:06:57.900 21:58:40 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:57.900 21:58:40 -- event/event.sh@42 -- # return 0
00:06:57.900
00:06:57.900 real 0m22.755s
00:06:57.900 user 0m52.236s
00:06:57.900 sys 0m4.626s
00:06:57.900 21:58:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:57.900 21:58:40 -- common/autotest_common.sh@10 -- # set +x
00:06:57.900 ************************************
00:06:57.900 END TEST app_repeat
00:06:57.900 ************************************
00:06:57.900 21:58:40 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:57.900 21:58:40 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:57.900 21:58:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:57.900 21:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:57.900 21:58:40 -- common/autotest_common.sh@10 -- # set +x
00:06:57.900 ************************************
00:06:57.900 START TEST cpu_locks
00:06:57.900 ************************************
00:06:57.900 21:58:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:58.159 * Looking for test storage...
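The killprocess helper traced above follows a fixed teardown sequence: probe the pid with kill -0, read its command name with ps (guarded by a uname check, since the ps flags are Linux-specific), refuse to signal sudo, then kill and reap with wait. A minimal sketch of that sequence, with a background sleep standing in for the spdk_tgt reactor (an assumption for the sake of a runnable example):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern. "sleep 30" is a hypothetical
# stand-in for the long-running spdk_tgt process being torn down.
sleep 30 &
pid=$!

killprocess() {
    local p=$1 process_name=
    [ -z "$p" ] && return 1
    kill -0 "$p" || return 1                       # is the pid still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$p")   # Linux-only ps flags
    fi
    [ "$process_name" = sudo ] && return 1          # never SIGTERM sudo itself
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null                           # reap; status reflects the signal
    return 0
}

killprocess "$pid"
```

The kill -0 probe sends no signal at all; it only checks that the pid exists and is signalable, which is why the helper can bail out early without side effects.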
00:06:58.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:58.159 21:58:40 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:58.159 21:58:40 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:58.159 21:58:40 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:58.159 21:58:40 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:58.159 21:58:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:58.159 21:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:58.159 21:58:40 -- common/autotest_common.sh@10 -- # set +x
00:06:58.159 ************************************
00:06:58.159 START TEST default_locks
00:06:58.159 ************************************
00:06:58.159 21:58:40 -- common/autotest_common.sh@1111 -- # default_locks
00:06:58.159 21:58:40 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3843084
00:06:58.159 21:58:40 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:58.159 21:58:40 -- event/cpu_locks.sh@47 -- # waitforlisten 3843084
00:06:58.159 21:58:40 -- common/autotest_common.sh@817 -- # '[' -z 3843084 ']'
00:06:58.159 21:58:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.159 21:58:40 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:58.159 21:58:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.159 21:58:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:58.159 21:58:40 -- common/autotest_common.sh@10 -- # set +x
00:06:58.159 [2024-04-24 21:58:40.384047] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
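Both waitforlisten (max_retries=100) and the waitfornbd_exit loop earlier in this run use the same bounded-poll shape: retry a readiness check up to a cap, return 0 as soon as it passes, and fail with an error after the cap. A reduced sketch of that shape is below; the marker file standing in for the RPC Unix socket is an assumption, not the helper's real probe.

```shell
#!/usr/bin/env bash
# Sketch of the bounded-poll pattern behind waitforlisten/waitfornbd_exit.
# A pre-created marker file is a hypothetical stand-in for the condition
# the real helpers probe (an RPC socket, an entry in /proc/partitions).
sock=$(mktemp)

waitforcondition() {
    local max_retries=100 i
    for (( i = 1; i <= max_retries; i++ )); do
        [ -e "$sock" ] && return 0     # condition satisfied: stop polling
        sleep 0.1
    done
    echo "ERROR: timed out waiting for $sock" >&2
    return 1
}

waitforcondition
waited=$?
rm -f "$sock"
```

Bounding the loop is what turns a hung spdk_tgt startup into a clean test failure instead of a wedged CI job.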
00:06:58.159 [2024-04-24 21:58:40.384195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843084 ]
00:06:58.417 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.417 [2024-04-24 21:58:40.467455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.417 [2024-04-24 21:58:40.591275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.675 21:58:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:58.675 21:58:40 -- common/autotest_common.sh@850 -- # return 0
00:06:58.675 21:58:40 -- event/cpu_locks.sh@49 -- # locks_exist 3843084
00:06:58.675 21:58:40 -- event/cpu_locks.sh@22 -- # lslocks -p 3843084
00:06:58.675 21:58:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:58.959 lslocks: write error
00:06:58.959 21:58:41 -- event/cpu_locks.sh@50 -- # killprocess 3843084
00:06:58.959 21:58:41 -- common/autotest_common.sh@936 -- # '[' -z 3843084 ']'
00:06:58.959 21:58:41 -- common/autotest_common.sh@940 -- # kill -0 3843084
00:06:58.959 21:58:41 -- common/autotest_common.sh@941 -- # uname
00:06:58.959 21:58:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:58.959 21:58:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3843084
00:06:59.228 21:58:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:59.228 21:58:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:59.228 21:58:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3843084'
killing process with pid 3843084
00:06:59.228 21:58:41 -- common/autotest_common.sh@955 -- # kill 3843084
00:06:59.228 21:58:41 -- common/autotest_common.sh@960 -- # wait 3843084
00:06:59.487 21:58:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3843084
00:06:59.487 21:58:41 -- common/autotest_common.sh@638 -- # local es=0
00:06:59.487 21:58:41 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3843084
00:06:59.487 21:58:41 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:06:59.487 21:58:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:59.487 21:58:41 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:06:59.487 21:58:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:59.487 21:58:41 -- common/autotest_common.sh@641 -- # waitforlisten 3843084
00:06:59.487 21:58:41 -- common/autotest_common.sh@817 -- # '[' -z 3843084 ']'
00:06:59.487 21:58:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.487 21:58:41 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:59.487 21:58:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
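The `NOT waitforlisten 3843084` call above runs a command that is expected to fail: the wrapper records the status in es, and succeeds only when the wrapped command did not. A reduced sketch of that idiom (the real helper also vets the argument with type -t via valid_exec_arg and inspects es for signals, which is omitted here):

```shell
#!/usr/bin/env bash
# Reduced sketch of the autotest NOT idiom: invert a command's exit status.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # NOT succeeds exactly when the wrapped command failed
}

NOT false && not_false=ok     # false fails, so NOT false succeeds
NOT true  || not_true=ok      # true succeeds, so NOT true fails
```

Capturing the status in es rather than testing `! "$@"` directly lets the full helper distinguish ordinary failures from es > 128 signal deaths, as the trace above shows.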
00:06:59.487 21:58:41 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:59.487 21:58:41 -- common/autotest_common.sh@10 -- # set +x
00:06:59.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3843084) - No such process
00:06:59.487 ERROR: process (pid: 3843084) is no longer running
00:06:59.487 21:58:41 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:59.487 21:58:41 -- common/autotest_common.sh@850 -- # return 1
00:06:59.487 21:58:41 -- common/autotest_common.sh@641 -- # es=1
00:06:59.487 21:58:41 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:59.487 21:58:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:59.487 21:58:41 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:59.487 21:58:41 -- event/cpu_locks.sh@54 -- # no_locks
00:06:59.487 21:58:41 -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:59.487 21:58:41 -- event/cpu_locks.sh@26 -- # local lock_files
00:06:59.487 21:58:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:59.487
00:06:59.487 real 0m1.389s
00:06:59.487 user 0m1.353s
00:06:59.487 sys 0m0.591s
00:06:59.487 21:58:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:59.487 21:58:41 -- common/autotest_common.sh@10 -- # set +x
00:06:59.487 ************************************
00:06:59.487 END TEST default_locks
00:06:59.487 ************************************
00:06:59.487 21:58:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:59.487 21:58:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:59.487 21:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:59.487 21:58:41 -- common/autotest_common.sh@10 -- # set +x
00:06:59.745 ************************************
00:06:59.745 START TEST default_locks_via_rpc
00:06:59.745 ************************************
00:06:59.745 21:58:41 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc
00:06:59.745 21:58:41 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3843262
00:06:59.745 21:58:41 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.745 21:58:41 -- event/cpu_locks.sh@63 -- # waitforlisten 3843262
00:06:59.745 21:58:41 -- common/autotest_common.sh@817 -- # '[' -z 3843262 ']'
00:06:59.745 21:58:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.745 21:58:41 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:59.745 21:58:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.745 21:58:41 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:59.745 21:58:41 -- common/autotest_common.sh@10 -- # set +x
00:06:59.746 [2024-04-24 21:58:41.912106] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
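The locks_exist check used throughout these tests is `lslocks -p <pid> | grep -q spdk_cpu_lock`: the target holds a file lock whose path contains spdk_cpu_lock, and lslocks lists it for that pid. A sketch of the same check is below; flock(1) holding a scratch file under a spdk_cpu_lock-prefixed name stands in for the CPU core lock spdk_tgt actually takes, so the file name and timing are assumptions of this example.

```shell
#!/usr/bin/env bash
# Sketch of the locks_exist pattern. flock on a hypothetical scratch file
# stands in for the spdk_cpu_lock file lock held by spdk_tgt.
lockfile=$(mktemp /tmp/spdk_cpu_lock.XXXXXX)
flock "$lockfile" sleep 2 &      # hold the lock briefly in the background
lockpid=$!
sleep 0.3                        # give flock a moment to acquire it

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist "$lockpid"
found=$?
rm -f "$lockfile"
```

The `lslocks: write error` lines in the trace are lslocks complaining about its output pipe being closed early by `grep -q`, not a failure of the lock check itself.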
00:06:59.746 [2024-04-24 21:58:41.912212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843262 ]
00:06:59.746 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.746 [2024-04-24 21:58:41.980114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.004 [2024-04-24 21:58:42.103326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.262 21:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:00.262 21:58:42 -- common/autotest_common.sh@850 -- # return 0
00:07:00.262 21:58:42 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:00.262 21:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:00.262 21:58:42 -- common/autotest_common.sh@10 -- # set +x
00:07:00.262 21:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:00.262 21:58:42 -- event/cpu_locks.sh@67 -- # no_locks
00:07:00.262 21:58:42 -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:00.262 21:58:42 -- event/cpu_locks.sh@26 -- # local lock_files
00:07:00.262 21:58:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:00.262 21:58:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:00.262 21:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:00.262 21:58:42 -- common/autotest_common.sh@10 -- # set +x
00:07:00.262 21:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:00.262 21:58:42 -- event/cpu_locks.sh@71 -- # locks_exist 3843262
00:07:00.262 21:58:42 -- event/cpu_locks.sh@22 -- # lslocks -p 3843262
00:07:00.262 21:58:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:00.520 21:58:42 -- event/cpu_locks.sh@73 -- # killprocess 3843262
00:07:00.520 21:58:42 -- common/autotest_common.sh@936 -- # '[' -z 3843262 ']'
00:07:00.520 21:58:42 -- common/autotest_common.sh@940 -- # kill -0 3843262
00:07:00.520 21:58:42 -- common/autotest_common.sh@941 -- # uname
00:07:00.520 21:58:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:00.520 21:58:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3843262
00:07:00.520 21:58:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:00.520 21:58:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:00.520 21:58:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3843262'
killing process with pid 3843262
00:07:00.520 21:58:42 -- common/autotest_common.sh@955 -- # kill 3843262
00:07:00.520 21:58:42 -- common/autotest_common.sh@960 -- # wait 3843262
00:07:01.085
00:07:01.085 real 0m1.369s
00:07:01.085 user 0m1.328s
00:07:01.085 sys 0m0.553s
00:07:01.085 21:58:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:01.085 21:58:43 -- common/autotest_common.sh@10 -- # set +x
00:07:01.085 ************************************
00:07:01.085 END TEST default_locks_via_rpc
00:07:01.085 ************************************
00:07:01.085 21:58:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:01.085 21:58:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:01.085 21:58:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:01.085 21:58:43 -- common/autotest_common.sh@10 -- # set +x
00:07:01.343 ************************************
00:07:01.343 START TEST non_locking_app_on_locked_coremask
00:07:01.343 ************************************
00:07:01.343 21:58:43 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask
00:07:01.343 21:58:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3843428
00:07:01.343 21:58:43 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:01.343 21:58:43 -- event/cpu_locks.sh@81 -- # waitforlisten 3843428 /var/tmp/spdk.sock
00:07:01.343 21:58:43 -- common/autotest_common.sh@817 -- # '[' -z 3843428 ']'
00:07:01.343 21:58:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.343 21:58:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:01.343 21:58:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:01.343 21:58:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:01.343 21:58:43 -- common/autotest_common.sh@10 -- # set +x
00:07:01.343 [2024-04-24 21:58:43.405532] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:07:01.343 [2024-04-24 21:58:43.405638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843428 ]
00:07:01.343 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.343 [2024-04-24 21:58:43.475680] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.343 [2024-04-24 21:58:43.598271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.601 21:58:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:01.601 21:58:43 -- common/autotest_common.sh@850 -- # return 0
00:07:01.859 21:58:43 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3843561
00:07:01.859 21:58:43 -- event/cpu_locks.sh@85 -- # waitforlisten 3843561 /var/tmp/spdk2.sock
00:07:01.859 21:58:43 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:01.859 21:58:43 -- common/autotest_common.sh@817 -- # '[' -z 3843561 ']'
00:07:01.859 21:58:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:01.859 21:58:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:01.859 21:58:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:01.859 21:58:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:01.859 21:58:43 -- common/autotest_common.sh@10 -- # set +x
00:07:01.859 [2024-04-24 21:58:43.927250] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:07:01.859 [2024-04-24 21:58:43.927454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843561 ]
00:07:01.859 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.859 [2024-04-24 21:58:44.067436] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:01.859 [2024-04-24 21:58:44.067483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.118 [2024-04-24 21:58:44.310281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.053 21:58:44 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:03.053 21:58:44 -- common/autotest_common.sh@850 -- # return 0
00:07:03.053 21:58:44 -- event/cpu_locks.sh@87 -- # locks_exist 3843428
00:07:03.053 21:58:44 -- event/cpu_locks.sh@22 -- # lslocks -p 3843428
00:07:03.053 21:58:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:03.986 lslocks: write error
00:07:03.986 21:58:46 -- event/cpu_locks.sh@89 -- # killprocess 3843428
00:07:03.986 21:58:46 -- common/autotest_common.sh@936 -- # '[' -z 3843428 ']'
00:07:03.986 21:58:46 -- common/autotest_common.sh@940 -- # kill -0 3843428
00:07:03.986 21:58:46 -- common/autotest_common.sh@941 -- # uname
00:07:03.986 21:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:03.986 21:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3843428
00:07:04.244 21:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:04.244 21:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:04.244 21:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3843428'
killing process with pid 3843428
00:07:04.244 21:58:46 -- common/autotest_common.sh@955 -- # kill 3843428
00:07:04.245 21:58:46 -- common/autotest_common.sh@960 -- # wait 3843428
00:07:05.179 21:58:47 -- event/cpu_locks.sh@90 -- # killprocess 3843561
00:07:05.179 21:58:47 -- common/autotest_common.sh@936 -- # '[' -z 3843561 ']'
00:07:05.179 21:58:47 -- common/autotest_common.sh@940 -- # kill -0 3843561
00:07:05.179 21:58:47 -- common/autotest_common.sh@941 -- # uname
00:07:05.179 21:58:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:05.179 21:58:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3843561
00:07:05.179 21:58:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:05.179 21:58:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:05.179 21:58:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3843561'
killing process with pid 3843561
00:07:05.179 21:58:47 -- common/autotest_common.sh@955 -- # kill 3843561
00:07:05.179 21:58:47 -- common/autotest_common.sh@960 -- # wait 3843561
00:07:05.746
00:07:05.746 real 0m4.412s
00:07:05.746 user 0m4.779s
00:07:05.746 sys 0m1.517s
00:07:05.746 21:58:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:05.746 21:58:47 -- common/autotest_common.sh@10 -- # set +x
00:07:05.746 ************************************
00:07:05.746 END TEST non_locking_app_on_locked_coremask
00:07:05.746 ************************************
00:07:05.746 21:58:47 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:05.746 21:58:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:05.746 21:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:05.746 21:58:47 -- common/autotest_common.sh@10 -- # set +x
00:07:05.746 ************************************
00:07:05.746 START TEST locking_app_on_unlocked_coremask
00:07:05.746 ************************************
00:07:05.746 21:58:47 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask
00:07:05.746 21:58:47 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3844010
00:07:05.746 21:58:47 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:05.746 21:58:47 -- event/cpu_locks.sh@99 -- # waitforlisten 3844010 /var/tmp/spdk.sock
00:07:05.746 21:58:47 -- common/autotest_common.sh@817 -- # '[' -z 3844010 ']'
00:07:05.746 21:58:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.746 21:58:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:05.746 21:58:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.746 21:58:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:05.746 21:58:47 -- common/autotest_common.sh@10 -- # set +x
00:07:05.746 [2024-04-24 21:58:47.968085] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:07:05.746 [2024-04-24 21:58:47.968181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844010 ]
00:07:06.005 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.005 [2024-04-24 21:58:48.040498] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:06.005 [2024-04-24 21:58:48.040541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.005 [2024-04-24 21:58:48.163549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.264 21:58:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:06.264 21:58:48 -- common/autotest_common.sh@850 -- # return 0
00:07:06.264 21:58:48 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3844133
00:07:06.264 21:58:48 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:06.264 21:58:48 -- event/cpu_locks.sh@103 -- # waitforlisten 3844133 /var/tmp/spdk2.sock
00:07:06.264 21:58:48 -- common/autotest_common.sh@817 -- # '[' -z 3844133 ']'
00:07:06.264 21:58:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:06.264 21:58:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:06.264 21:58:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:06.264 21:58:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:06.264 21:58:48 -- common/autotest_common.sh@10 -- # set +x
00:07:06.264 [2024-04-24 21:58:48.494982] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:07:06.264 [2024-04-24 21:58:48.495089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844133 ]
00:07:06.521 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.521 [2024-04-24 21:58:48.599180] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.780 [2024-04-24 21:58:48.846592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.346 21:58:49 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:07.346 21:58:49 -- common/autotest_common.sh@850 -- # return 0
00:07:07.346 21:58:49 -- event/cpu_locks.sh@105 -- # locks_exist 3844133
00:07:07.346 21:58:49 -- event/cpu_locks.sh@22 -- # lslocks -p 3844133
00:07:07.346 21:58:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:07.911 lslocks: write error
00:07:07.911 21:58:50 -- event/cpu_locks.sh@107 -- # killprocess 3844010
00:07:07.911 21:58:50 -- common/autotest_common.sh@936 -- # '[' -z 3844010 ']'
00:07:07.911 21:58:50 -- common/autotest_common.sh@940 -- # kill -0 3844010
00:07:07.911 21:58:50 -- common/autotest_common.sh@941 -- # uname
00:07:07.911 21:58:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:07.911 21:58:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844010
00:07:07.911 21:58:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:07.911 21:58:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:07.911 21:58:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844010'
killing process with pid 3844010
00:07:07.911 21:58:50 -- common/autotest_common.sh@955 -- # kill 3844010
00:07:07.911 21:58:50 -- common/autotest_common.sh@960 -- # wait 3844010
00:07:08.846 21:58:51 -- event/cpu_locks.sh@108 -- # killprocess 3844133
00:07:08.846 21:58:51 -- common/autotest_common.sh@936 -- # '[' -z 3844133 ']'
00:07:08.846 21:58:51 -- common/autotest_common.sh@940 -- # kill -0 3844133
00:07:08.846 21:58:51 -- common/autotest_common.sh@941 -- # uname
00:07:08.846 21:58:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:08.846 21:58:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844133
00:07:09.104 21:58:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:09.104 21:58:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:09.104 21:58:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844133'
killing process with pid 3844133
00:07:09.104 21:58:51 -- common/autotest_common.sh@955 -- # kill 3844133
00:07:09.104 21:58:51 -- common/autotest_common.sh@960 -- # wait 3844133
00:07:09.362
00:07:09.362 real 0m3.681s
00:07:09.362 user 0m3.939s
00:07:09.362 sys 0m1.166s
00:07:09.363 21:58:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:09.363 21:58:51 -- common/autotest_common.sh@10 -- # set +x
00:07:09.363 ************************************
00:07:09.363 END TEST locking_app_on_unlocked_coremask
00:07:09.363 ************************************
00:07:09.363 21:58:51 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:09.363 21:58:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:09.363 21:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:09.363 21:58:51 -- common/autotest_common.sh@10 -- # set +x
00:07:09.621 ************************************
00:07:09.621 START TEST locking_app_on_locked_coremask
00:07:09.621 ************************************
00:07:09.621 21:58:51 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask
00:07:09.621 21:58:51 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3844576
00:07:09.621 21:58:51 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:09.621 21:58:51 -- event/cpu_locks.sh@116 -- # waitforlisten 3844576 /var/tmp/spdk.sock
00:07:09.621 21:58:51 -- common/autotest_common.sh@817 -- # '[' -z 3844576 ']'
00:07:09.621 21:58:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:09.621 21:58:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:09.621 21:58:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:09.621 21:58:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:09.621 21:58:51 -- common/autotest_common.sh@10 -- # set +x
00:07:09.621 [2024-04-24 21:58:51.824106] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:07:09.621 [2024-04-24 21:58:51.824286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844576 ]
00:07:09.900 EAL: No free 2048 kB hugepages reported on node 1
00:07:09.900 [2024-04-24 21:58:51.913859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.900 [2024-04-24 21:58:52.037710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.158 21:58:52 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:10.158 21:58:52 -- common/autotest_common.sh@850 -- # return 0
00:07:10.158 21:58:52 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3844579
00:07:10.158 21:58:52 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:10.158 21:58:52 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3844579 /var/tmp/spdk2.sock
00:07:10.158 21:58:52 -- common/autotest_common.sh@638 -- # local es=0
00:07:10.158 21:58:52 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3844579 /var/tmp/spdk2.sock
00:07:10.158 21:58:52 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:07:10.158 21:58:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:07:10.158 21:58:52 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:07:10.158 21:58:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:07:10.158 21:58:52 -- common/autotest_common.sh@641 -- # waitforlisten 3844579 /var/tmp/spdk2.sock
00:07:10.158 21:58:52 -- common/autotest_common.sh@817 -- # '[' -z 3844579 ']'
00:07:10.158 21:58:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:10.158 21:58:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:10.158 21:58:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start
up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.158 21:58:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.158 21:58:52 -- common/autotest_common.sh@10 -- # set +x 00:07:10.158 [2024-04-24 21:58:52.360379] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:10.158 [2024-04-24 21:58:52.360491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844579 ] 00:07:10.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.415 [2024-04-24 21:58:52.457045] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3844576 has claimed it. 00:07:10.415 [2024-04-24 21:58:52.457095] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:11.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3844579) - No such process 00:07:11.346 ERROR: process (pid: 3844579) is no longer running 00:07:11.346 21:58:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.346 21:58:53 -- common/autotest_common.sh@850 -- # return 1 00:07:11.346 21:58:53 -- common/autotest_common.sh@641 -- # es=1 00:07:11.346 21:58:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.346 21:58:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.346 21:58:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.346 21:58:53 -- event/cpu_locks.sh@122 -- # locks_exist 3844576 00:07:11.346 21:58:53 -- event/cpu_locks.sh@22 -- # lslocks -p 3844576 00:07:11.346 21:58:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.909 lslocks: write error 00:07:11.909 21:58:53 -- event/cpu_locks.sh@124 -- # killprocess 3844576 00:07:11.909 21:58:53 -- common/autotest_common.sh@936 -- # '[' -z 3844576 ']' 00:07:11.909 21:58:53 -- common/autotest_common.sh@940 -- # kill -0 3844576 00:07:11.909 21:58:53 -- common/autotest_common.sh@941 -- # uname 00:07:11.909 21:58:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.909 21:58:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844576 00:07:11.909 21:58:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.909 21:58:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.909 21:58:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844576' 00:07:11.909 killing process with pid 3844576 00:07:11.909 21:58:53 -- common/autotest_common.sh@955 -- # kill 3844576 00:07:11.909 21:58:53 -- common/autotest_common.sh@960 -- # wait 3844576 00:07:12.166 00:07:12.166 real 0m2.669s 00:07:12.166 user 0m3.114s 00:07:12.166 sys 0m0.887s 00:07:12.166 21:58:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.166 
21:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.166 ************************************ 00:07:12.166 END TEST locking_app_on_locked_coremask 00:07:12.166 ************************************ 00:07:12.423 21:58:54 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:12.423 21:58:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.423 21:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.423 21:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.423 ************************************ 00:07:12.423 START TEST locking_overlapped_coremask 00:07:12.423 ************************************ 00:07:12.423 21:58:54 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:12.423 21:58:54 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3844880 00:07:12.423 21:58:54 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:12.423 21:58:54 -- event/cpu_locks.sh@133 -- # waitforlisten 3844880 /var/tmp/spdk.sock 00:07:12.423 21:58:54 -- common/autotest_common.sh@817 -- # '[' -z 3844880 ']' 00:07:12.423 21:58:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.423 21:58:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:12.423 21:58:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.423 21:58:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:12.423 21:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.423 [2024-04-24 21:58:54.615841] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:12.423 [2024-04-24 21:58:54.615943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844880 ] 00:07:12.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.682 [2024-04-24 21:58:54.691011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.682 [2024-04-24 21:58:54.810999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.682 [2024-04-24 21:58:54.811052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.682 [2024-04-24 21:58:54.811055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.939 21:58:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:12.939 21:58:55 -- common/autotest_common.sh@850 -- # return 0 00:07:12.939 21:58:55 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3845008 00:07:12.939 21:58:55 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:12.939 21:58:55 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3845008 /var/tmp/spdk2.sock 00:07:12.939 21:58:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:12.939 21:58:55 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3845008 /var/tmp/spdk2.sock 00:07:12.939 21:58:55 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:12.939 21:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:12.939 21:58:55 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:12.939 21:58:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:12.939 21:58:55 -- common/autotest_common.sh@641 -- # waitforlisten 3845008 /var/tmp/spdk2.sock 00:07:12.939 21:58:55 -- common/autotest_common.sh@817 -- # '[' -z 3845008 ']' 00:07:12.939 21:58:55 -- common/autotest_common.sh@821 -- # 
local rpc_addr=/var/tmp/spdk2.sock 00:07:12.940 21:58:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:12.940 21:58:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.940 21:58:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:12.940 21:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:12.940 [2024-04-24 21:58:55.174728] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:12.940 [2024-04-24 21:58:55.174864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845008 ] 00:07:13.197 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.197 [2024-04-24 21:58:55.302687] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3844880 has claimed it. 00:07:13.197 [2024-04-24 21:58:55.302744] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:13.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3845008) - No such process 00:07:13.762 ERROR: process (pid: 3845008) is no longer running 00:07:13.762 21:58:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.762 21:58:55 -- common/autotest_common.sh@850 -- # return 1 00:07:13.762 21:58:55 -- common/autotest_common.sh@641 -- # es=1 00:07:13.762 21:58:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:13.762 21:58:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:13.762 21:58:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:13.762 21:58:55 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:13.762 21:58:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.762 21:58:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.762 21:58:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.763 21:58:55 -- event/cpu_locks.sh@141 -- # killprocess 3844880 00:07:13.763 21:58:55 -- common/autotest_common.sh@936 -- # '[' -z 3844880 ']' 00:07:13.763 21:58:55 -- common/autotest_common.sh@940 -- # kill -0 3844880 00:07:13.763 21:58:55 -- common/autotest_common.sh@941 -- # uname 00:07:13.763 21:58:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.763 21:58:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844880 00:07:13.763 21:58:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:13.763 21:58:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:13.763 21:58:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844880' 00:07:13.763 killing process with pid 3844880 00:07:13.763 
21:58:55 -- common/autotest_common.sh@955 -- # kill 3844880 00:07:13.763 21:58:55 -- common/autotest_common.sh@960 -- # wait 3844880 00:07:14.328 00:07:14.328 real 0m1.899s 00:07:14.328 user 0m5.166s 00:07:14.328 sys 0m0.545s 00:07:14.328 21:58:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.328 21:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:14.328 ************************************ 00:07:14.328 END TEST locking_overlapped_coremask 00:07:14.328 ************************************ 00:07:14.328 21:58:56 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.328 21:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.328 21:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.328 21:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 ************************************ 00:07:14.586 START TEST locking_overlapped_coremask_via_rpc 00:07:14.586 ************************************ 00:07:14.586 21:58:56 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:14.586 21:58:56 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3845186 00:07:14.586 21:58:56 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.586 21:58:56 -- event/cpu_locks.sh@149 -- # waitforlisten 3845186 /var/tmp/spdk.sock 00:07:14.586 21:58:56 -- common/autotest_common.sh@817 -- # '[' -z 3845186 ']' 00:07:14.586 21:58:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.586 21:58:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.586 21:58:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.586 21:58:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.586 21:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:14.586 [2024-04-24 21:58:56.660186] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:14.586 [2024-04-24 21:58:56.660288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845186 ] 00:07:14.586 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.586 [2024-04-24 21:58:56.735588] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.586 [2024-04-24 21:58:56.735623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.844 [2024-04-24 21:58:56.856585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.844 [2024-04-24 21:58:56.856637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.844 [2024-04-24 21:58:56.856640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.102 21:58:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.102 21:58:57 -- common/autotest_common.sh@850 -- # return 0 00:07:15.102 21:58:57 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3845197 00:07:15.102 21:58:57 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.102 21:58:57 -- event/cpu_locks.sh@153 -- # waitforlisten 3845197 /var/tmp/spdk2.sock 00:07:15.102 21:58:57 -- common/autotest_common.sh@817 -- # '[' -z 3845197 ']' 00:07:15.102 21:58:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.102 21:58:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.102 21:58:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk2.sock...' 00:07:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.102 21:58:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.102 21:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:15.102 [2024-04-24 21:58:57.188744] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:15.102 [2024-04-24 21:58:57.188843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845197 ] 00:07:15.102 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.102 [2024-04-24 21:58:57.293184] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.102 [2024-04-24 21:58:57.293226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.360 [2024-04-24 21:58:57.535234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.360 [2024-04-24 21:58:57.538459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.360 [2024-04-24 21:58:57.538462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.293 21:58:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.293 21:58:58 -- common/autotest_common.sh@850 -- # return 0 00:07:16.293 21:58:58 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.293 21:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.293 21:58:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.293 21:58:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.293 21:58:58 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.293 21:58:58 -- common/autotest_common.sh@638 -- # local es=0 00:07:16.293 21:58:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s 
/var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.293 21:58:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:16.293 21:58:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.293 21:58:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:16.293 21:58:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.293 21:58:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.293 21:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.293 21:58:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.293 [2024-04-24 21:58:58.279504] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3845186 has claimed it. 00:07:16.293 request: 00:07:16.293 { 00:07:16.293 "method": "framework_enable_cpumask_locks", 00:07:16.293 "req_id": 1 00:07:16.293 } 00:07:16.293 Got JSON-RPC error response 00:07:16.293 response: 00:07:16.293 { 00:07:16.293 "code": -32603, 00:07:16.293 "message": "Failed to claim CPU core: 2" 00:07:16.293 } 00:07:16.293 21:58:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:16.293 21:58:58 -- common/autotest_common.sh@641 -- # es=1 00:07:16.293 21:58:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:16.293 21:58:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:16.293 21:58:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:16.293 21:58:58 -- event/cpu_locks.sh@158 -- # waitforlisten 3845186 /var/tmp/spdk.sock 00:07:16.293 21:58:58 -- common/autotest_common.sh@817 -- # '[' -z 3845186 ']' 00:07:16.293 21:58:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.293 21:58:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.293 21:58:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:16.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.293 21:58:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.293 21:58:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.551 21:58:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.551 21:58:58 -- common/autotest_common.sh@850 -- # return 0 00:07:16.551 21:58:58 -- event/cpu_locks.sh@159 -- # waitforlisten 3845197 /var/tmp/spdk2.sock 00:07:16.551 21:58:58 -- common/autotest_common.sh@817 -- # '[' -z 3845197 ']' 00:07:16.551 21:58:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.551 21:58:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.551 21:58:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.551 21:58:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.551 21:58:58 -- common/autotest_common.sh@10 -- # set +x 00:07:16.808 21:58:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.808 21:58:59 -- common/autotest_common.sh@850 -- # return 0 00:07:16.808 21:58:59 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.808 21:58:59 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.809 21:58:59 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.809 21:58:59 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.809 00:07:16.809 real 0m2.416s 00:07:16.809 user 0m1.386s 00:07:16.809 sys 0m0.250s 00:07:16.809 21:58:59 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:07:16.809 21:58:59 -- common/autotest_common.sh@10 -- # set +x 00:07:16.809 ************************************ 00:07:16.809 END TEST locking_overlapped_coremask_via_rpc 00:07:16.809 ************************************ 00:07:16.809 21:58:59 -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.809 21:58:59 -- event/cpu_locks.sh@15 -- # [[ -z 3845186 ]] 00:07:16.809 21:58:59 -- event/cpu_locks.sh@15 -- # killprocess 3845186 00:07:16.809 21:58:59 -- common/autotest_common.sh@936 -- # '[' -z 3845186 ']' 00:07:16.809 21:58:59 -- common/autotest_common.sh@940 -- # kill -0 3845186 00:07:16.809 21:58:59 -- common/autotest_common.sh@941 -- # uname 00:07:16.809 21:58:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.809 21:58:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3845186 00:07:17.066 21:58:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.066 21:58:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.066 21:58:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3845186' 00:07:17.066 killing process with pid 3845186 00:07:17.066 21:58:59 -- common/autotest_common.sh@955 -- # kill 3845186 00:07:17.066 21:58:59 -- common/autotest_common.sh@960 -- # wait 3845186 00:07:17.367 21:58:59 -- event/cpu_locks.sh@16 -- # [[ -z 3845197 ]] 00:07:17.367 21:58:59 -- event/cpu_locks.sh@16 -- # killprocess 3845197 00:07:17.367 21:58:59 -- common/autotest_common.sh@936 -- # '[' -z 3845197 ']' 00:07:17.367 21:58:59 -- common/autotest_common.sh@940 -- # kill -0 3845197 00:07:17.367 21:58:59 -- common/autotest_common.sh@941 -- # uname 00:07:17.367 21:58:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.367 21:58:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3845197 00:07:17.626 21:58:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:17.626 21:58:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo 
']' 00:07:17.626 21:58:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3845197' 00:07:17.626 killing process with pid 3845197 00:07:17.626 21:58:59 -- common/autotest_common.sh@955 -- # kill 3845197 00:07:17.626 21:58:59 -- common/autotest_common.sh@960 -- # wait 3845197 00:07:17.885 21:59:00 -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.885 21:59:00 -- event/cpu_locks.sh@1 -- # cleanup 00:07:17.885 21:59:00 -- event/cpu_locks.sh@15 -- # [[ -z 3845186 ]] 00:07:17.885 21:59:00 -- event/cpu_locks.sh@15 -- # killprocess 3845186 00:07:17.885 21:59:00 -- common/autotest_common.sh@936 -- # '[' -z 3845186 ']' 00:07:17.885 21:59:00 -- common/autotest_common.sh@940 -- # kill -0 3845186 00:07:17.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3845186) - No such process 00:07:17.885 21:59:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3845186 is not found' 00:07:17.885 Process with pid 3845186 is not found 00:07:17.885 21:59:00 -- event/cpu_locks.sh@16 -- # [[ -z 3845197 ]] 00:07:17.885 21:59:00 -- event/cpu_locks.sh@16 -- # killprocess 3845197 00:07:17.885 21:59:00 -- common/autotest_common.sh@936 -- # '[' -z 3845197 ']' 00:07:17.885 21:59:00 -- common/autotest_common.sh@940 -- # kill -0 3845197 00:07:17.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3845197) - No such process 00:07:17.885 21:59:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3845197 is not found' 00:07:17.885 Process with pid 3845197 is not found 00:07:17.885 21:59:00 -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.885 00:07:17.885 real 0m19.960s 00:07:17.885 user 0m34.461s 00:07:17.885 sys 0m6.793s 00:07:17.885 21:59:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.885 21:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:17.885 ************************************ 00:07:17.885 END TEST cpu_locks 00:07:17.885 
************************************ 00:07:17.885 00:07:17.885 real 0m52.223s 00:07:17.885 user 1m41.635s 00:07:17.885 sys 0m12.676s 00:07:17.885 21:59:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.885 21:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:17.885 ************************************ 00:07:17.885 END TEST event 00:07:17.885 ************************************ 00:07:18.143 21:59:00 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.143 21:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.143 21:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.143 21:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:18.143 ************************************ 00:07:18.143 START TEST thread 00:07:18.143 ************************************ 00:07:18.143 21:59:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.143 * Looking for test storage... 00:07:18.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:18.143 21:59:00 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.143 21:59:00 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:18.143 21:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.143 21:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:18.401 ************************************ 00:07:18.401 START TEST thread_poller_perf 00:07:18.401 ************************************ 00:07:18.401 21:59:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.401 [2024-04-24 21:59:00.452556] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:18.401 [2024-04-24 21:59:00.452621] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845709 ] 00:07:18.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.401 [2024-04-24 21:59:00.524140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.401 [2024-04-24 21:59:00.645224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.401 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:19.772 ====================================== 00:07:19.772 busy:2711129180 (cyc) 00:07:19.772 total_run_count: 291000 00:07:19.772 tsc_hz: 2700000000 (cyc) 00:07:19.772 ====================================== 00:07:19.772 poller_cost: 9316 (cyc), 3450 (nsec) 00:07:19.772 00:07:19.772 real 0m1.345s 00:07:19.772 user 0m1.243s 00:07:19.772 sys 0m0.095s 00:07:19.772 21:59:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.772 21:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.772 ************************************ 00:07:19.772 END TEST thread_poller_perf 00:07:19.772 ************************************ 00:07:19.772 21:59:01 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.772 21:59:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:19.772 21:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.773 21:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.773 ************************************ 00:07:19.773 START TEST thread_poller_perf 00:07:19.773 ************************************ 00:07:19.773 21:59:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.773 
[2024-04-24 21:59:01.949894] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:19.773 [2024-04-24 21:59:01.950032] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845990 ] 00:07:19.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.031 [2024-04-24 21:59:02.048978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.031 [2024-04-24 21:59:02.169757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.031 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:21.404 ====================================== 00:07:21.404 busy:2702555956 (cyc) 00:07:21.404 total_run_count: 3831000 00:07:21.404 tsc_hz: 2700000000 (cyc) 00:07:21.404 ====================================== 00:07:21.404 poller_cost: 705 (cyc), 261 (nsec) 00:07:21.404 00:07:21.404 real 0m1.371s 00:07:21.404 user 0m1.252s 00:07:21.404 sys 0m0.112s 00:07:21.404 21:59:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.404 21:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.404 ************************************ 00:07:21.404 END TEST thread_poller_perf 00:07:21.404 ************************************ 00:07:21.404 21:59:03 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:21.404 00:07:21.404 real 0m3.060s 00:07:21.404 user 0m2.626s 00:07:21.404 sys 0m0.406s 00:07:21.404 21:59:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.404 21:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.404 ************************************ 00:07:21.404 END TEST thread 00:07:21.404 ************************************ 00:07:21.404 21:59:03 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:21.404 21:59:03 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:07:21.404 21:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.404 21:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.404 ************************************ 00:07:21.404 START TEST accel 00:07:21.404 ************************************ 00:07:21.404 21:59:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:21.404 * Looking for test storage... 00:07:21.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:21.404 21:59:03 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:21.404 21:59:03 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:21.404 21:59:03 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.404 21:59:03 -- accel/accel.sh@62 -- # spdk_tgt_pid=3846193 00:07:21.404 21:59:03 -- accel/accel.sh@63 -- # waitforlisten 3846193 00:07:21.404 21:59:03 -- common/autotest_common.sh@817 -- # '[' -z 3846193 ']' 00:07:21.404 21:59:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.404 21:59:03 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:21.404 21:59:03 -- accel/accel.sh@61 -- # build_accel_config 00:07:21.404 21:59:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:21.404 21:59:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.404 21:59:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.404 21:59:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:21.404 21:59:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.404 21:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.404 21:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.404 21:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.404 21:59:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.404 21:59:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.404 21:59:03 -- accel/accel.sh@41 -- # jq -r . 00:07:21.404 [2024-04-24 21:59:03.606575] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:21.404 [2024-04-24 21:59:03.606715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846193 ] 00:07:21.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.662 [2024-04-24 21:59:03.713341] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.662 [2024-04-24 21:59:03.835285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.598 21:59:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:22.598 21:59:04 -- common/autotest_common.sh@850 -- # return 0 00:07:22.598 21:59:04 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:22.598 21:59:04 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:22.598 21:59:04 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:22.598 21:59:04 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:22.598 21:59:04 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:22.598 21:59:04 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:22.598 21:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.598 21:59:04 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:22.598 21:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:22.598 21:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 
00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.598 21:59:04 -- accel/accel.sh@72 -- # IFS== 00:07:22.598 21:59:04 -- 
accel/accel.sh@72 -- # read -r opc module 00:07:22.598 21:59:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.598 21:59:04 -- accel/accel.sh@75 -- # killprocess 3846193 00:07:22.598 21:59:04 -- common/autotest_common.sh@936 -- # '[' -z 3846193 ']' 00:07:22.598 21:59:04 -- common/autotest_common.sh@940 -- # kill -0 3846193 00:07:22.598 21:59:04 -- common/autotest_common.sh@941 -- # uname 00:07:22.598 21:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.598 21:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3846193 00:07:22.598 21:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.598 21:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.598 21:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3846193' 00:07:22.598 killing process with pid 3846193 00:07:22.598 21:59:04 -- common/autotest_common.sh@955 -- # kill 3846193 00:07:22.598 21:59:04 -- common/autotest_common.sh@960 -- # wait 3846193 00:07:23.164 21:59:05 -- accel/accel.sh@76 -- # trap - ERR 00:07:23.164 21:59:05 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:23.164 21:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:23.164 21:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.164 21:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.164 21:59:05 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:23.164 21:59:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:23.164 21:59:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.164 21:59:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.164 21:59:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.164 21:59:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.164 21:59:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.164 21:59:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:23.164 21:59:05 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.164 21:59:05 -- accel/accel.sh@41 -- # jq -r . 00:07:23.164 21:59:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.164 21:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.423 21:59:05 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:23.423 21:59:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.423 21:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.423 21:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:23.423 ************************************ 00:07:23.423 START TEST accel_missing_filename 00:07:23.423 ************************************ 00:07:23.423 21:59:05 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:23.423 21:59:05 -- common/autotest_common.sh@638 -- # local es=0 00:07:23.423 21:59:05 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:23.423 21:59:05 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:23.423 21:59:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.423 21:59:05 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:23.423 21:59:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.423 21:59:05 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:23.423 21:59:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:23.423 21:59:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.423 21:59:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.423 21:59:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.423 21:59:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.423 21:59:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.423 21:59:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.423 21:59:05 -- 
accel/accel.sh@40 -- # local IFS=, 00:07:23.423 21:59:05 -- accel/accel.sh@41 -- # jq -r . 00:07:23.423 [2024-04-24 21:59:05.569579] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:23.423 [2024-04-24 21:59:05.569644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846503 ] 00:07:23.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.423 [2024-04-24 21:59:05.663767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.681 [2024-04-24 21:59:05.785879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.681 [2024-04-24 21:59:05.849049] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.939 [2024-04-24 21:59:05.939519] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:23.939 A filename is required. 
00:07:23.939 21:59:06 -- common/autotest_common.sh@641 -- # es=234 00:07:23.939 21:59:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:23.939 21:59:06 -- common/autotest_common.sh@650 -- # es=106 00:07:23.939 21:59:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:23.939 21:59:06 -- common/autotest_common.sh@658 -- # es=1 00:07:23.939 21:59:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:23.939 00:07:23.939 real 0m0.529s 00:07:23.939 user 0m0.423s 00:07:23.939 sys 0m0.169s 00:07:23.939 21:59:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.939 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:23.939 ************************************ 00:07:23.939 END TEST accel_missing_filename 00:07:23.939 ************************************ 00:07:23.939 21:59:06 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.939 21:59:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:23.939 21:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.939 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.198 ************************************ 00:07:24.198 START TEST accel_compress_verify 00:07:24.198 ************************************ 00:07:24.198 21:59:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.198 21:59:06 -- common/autotest_common.sh@638 -- # local es=0 00:07:24.198 21:59:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.198 21:59:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:24.198 21:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.198 21:59:06 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:07:24.198 21:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.198 21:59:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.198 21:59:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.198 21:59:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.198 21:59:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.198 21:59:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.198 21:59:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.198 21:59:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.198 21:59:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.198 21:59:06 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.198 21:59:06 -- accel/accel.sh@41 -- # jq -r . 00:07:24.198 [2024-04-24 21:59:06.234412] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:24.198 [2024-04-24 21:59:06.234476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846541 ] 00:07:24.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.198 [2024-04-24 21:59:06.302214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.198 [2024-04-24 21:59:06.424664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.456 [2024-04-24 21:59:06.487496] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.456 [2024-04-24 21:59:06.577139] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:24.456 00:07:24.456 Compression does not support the verify option, aborting. 
00:07:24.456 21:59:06 -- common/autotest_common.sh@641 -- # es=161 00:07:24.456 21:59:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:24.456 21:59:06 -- common/autotest_common.sh@650 -- # es=33 00:07:24.456 21:59:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:24.456 21:59:06 -- common/autotest_common.sh@658 -- # es=1 00:07:24.456 21:59:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:24.456 00:07:24.456 real 0m0.490s 00:07:24.456 user 0m0.372s 00:07:24.456 sys 0m0.153s 00:07:24.456 21:59:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.456 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.456 ************************************ 00:07:24.456 END TEST accel_compress_verify 00:07:24.456 ************************************ 00:07:24.715 21:59:06 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:24.715 21:59:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:24.715 21:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.715 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 ************************************ 00:07:24.715 START TEST accel_wrong_workload 00:07:24.715 ************************************ 00:07:24.715 21:59:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:24.715 21:59:06 -- common/autotest_common.sh@638 -- # local es=0 00:07:24.715 21:59:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:24.715 21:59:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:24.715 21:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.715 21:59:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:24.715 21:59:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.715 21:59:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:24.715 21:59:06 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:24.715 21:59:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.715 21:59:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.715 21:59:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.715 21:59:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.715 21:59:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.715 21:59:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.715 21:59:06 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.715 21:59:06 -- accel/accel.sh@41 -- # jq -r . 00:07:24.715 Unsupported workload type: foobar 00:07:24.715 [2024-04-24 21:59:06.861119] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:24.715 accel_perf options: 00:07:24.715 [-h help message] 00:07:24.715 [-q queue depth per core] 00:07:24.715 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:24.715 [-T number of threads per core 00:07:24.715 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:24.715 [-t time in seconds] 00:07:24.715 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:24.715 [ dif_verify, , dif_generate, dif_generate_copy 00:07:24.715 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:24.715 [-l for compress/decompress workloads, name of uncompressed input file 00:07:24.715 [-S for crc32c workload, use this seed value (default 0) 00:07:24.715 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:24.715 [-f for fill workload, use this BYTE value (default 255) 00:07:24.715 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:24.715 [-y verify result if this switch is on] 00:07:24.715 [-a tasks to allocate per core (default: same value as -q)] 00:07:24.715 Can be used to spread operations across a wider range of memory. 00:07:24.715 21:59:06 -- common/autotest_common.sh@641 -- # es=1 00:07:24.715 21:59:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:24.715 21:59:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:24.715 21:59:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:24.715 00:07:24.715 real 0m0.022s 00:07:24.715 user 0m0.011s 00:07:24.715 sys 0m0.011s 00:07:24.715 21:59:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.715 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 ************************************ 00:07:24.715 END TEST accel_wrong_workload 00:07:24.715 ************************************ 00:07:24.715 Error: writing output failed: Broken pipe 00:07:24.715 21:59:06 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:24.715 21:59:06 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:24.715 21:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:07:24.715 21:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:24.975 ************************************ 00:07:24.975 START TEST accel_negative_buffers 00:07:24.975 ************************************ 00:07:24.975 21:59:07 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:24.975 21:59:07 -- common/autotest_common.sh@638 -- # local es=0 00:07:24.975 21:59:07 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:24.975 21:59:07 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:24.975 21:59:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.975 21:59:07 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:24.975 21:59:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:24.975 21:59:07 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:24.975 21:59:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:24.975 21:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.975 21:59:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.975 21:59:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.975 21:59:07 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.975 21:59:07 -- accel/accel.sh@41 -- # jq -r . 00:07:24.975 -x option must be non-negative. 
00:07:24.975 [2024-04-24 21:59:07.023450] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:24.975 accel_perf options: 00:07:24.975 [-h help message] 00:07:24.975 [-q queue depth per core] 00:07:24.975 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:24.975 [-T number of threads per core 00:07:24.975 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:24.975 [-t time in seconds] 00:07:24.975 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:24.975 [ dif_verify, , dif_generate, dif_generate_copy 00:07:24.975 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:24.975 [-l for compress/decompress workloads, name of uncompressed input file 00:07:24.975 [-S for crc32c workload, use this seed value (default 0) 00:07:24.975 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:24.975 [-f for fill workload, use this BYTE value (default 255) 00:07:24.975 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:24.975 [-y verify result if this switch is on] 00:07:24.975 [-a tasks to allocate per core (default: same value as -q)] 00:07:24.975 Can be used to spread operations across a wider range of memory. 
00:07:24.975 21:59:07 -- common/autotest_common.sh@641 -- # es=1 00:07:24.975 21:59:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:24.975 21:59:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:24.975 21:59:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:24.975 00:07:24.975 real 0m0.026s 00:07:24.975 user 0m0.016s 00:07:24.975 sys 0m0.010s 00:07:24.975 21:59:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.975 21:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:24.975 ************************************ 00:07:24.975 END TEST accel_negative_buffers 00:07:24.975 ************************************ 00:07:24.975 21:59:07 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:24.975 21:59:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:24.975 21:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.975 21:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:24.975 Error: writing output failed: Broken pipe 00:07:24.975 ************************************ 00:07:24.975 START TEST accel_crc32c 00:07:24.975 ************************************ 00:07:24.975 21:59:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:24.975 21:59:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.975 21:59:07 -- accel/accel.sh@17 -- # local accel_module 00:07:24.975 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:24.975 21:59:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:24.975 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:24.975 21:59:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:24.975 21:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.975 21:59:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.975 21:59:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.975 21:59:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.975 21:59:07 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.975 21:59:07 -- accel/accel.sh@41 -- # jq -r . 00:07:24.975 [2024-04-24 21:59:07.189659] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:24.975 [2024-04-24 21:59:07.189739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846752 ] 00:07:25.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.233 [2024-04-24 21:59:07.274130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.233 [2024-04-24 21:59:07.395074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=0x1 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 
00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=crc32c 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=32 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=software 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=32 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=32 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=1 00:07:25.233 21:59:07 
-- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val=Yes 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:25.233 21:59:07 -- accel/accel.sh@20 -- # val= 00:07:25.233 21:59:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # IFS=: 00:07:25.233 21:59:07 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 
21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@20 -- # val= 00:07:26.607 21:59:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 00:07:26.607 21:59:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.607 21:59:08 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:26.607 21:59:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.607 00:07:26.607 real 0m1.508s 00:07:26.607 user 0m1.345s 00:07:26.607 sys 0m0.165s 00:07:26.607 21:59:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.607 21:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.607 ************************************ 00:07:26.607 END TEST accel_crc32c 00:07:26.607 ************************************ 00:07:26.607 21:59:08 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:26.607 21:59:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:26.607 21:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.607 21:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:26.607 ************************************ 00:07:26.607 START TEST accel_crc32c_C2 00:07:26.607 ************************************ 00:07:26.607 21:59:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:26.607 21:59:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.607 21:59:08 -- accel/accel.sh@17 -- # local accel_module 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # IFS=: 00:07:26.607 21:59:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:26.607 21:59:08 -- accel/accel.sh@19 -- # read -r var val 
00:07:26.607 21:59:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:26.607 21:59:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.607 21:59:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.607 21:59:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.607 21:59:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.607 21:59:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.607 21:59:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.607 21:59:08 -- accel/accel.sh@40 -- # local IFS=, 00:07:26.607 21:59:08 -- accel/accel.sh@41 -- # jq -r . 00:07:26.607 [2024-04-24 21:59:08.822210] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:26.607 [2024-04-24 21:59:08.822273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847032 ] 00:07:26.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.866 [2024-04-24 21:59:08.889747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.866 [2024-04-24 21:59:09.010401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=0x1 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- 
accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=crc32c 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=0 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=software 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@22 -- # accel_module=software 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=32 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 
-- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=32 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=1 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val=Yes 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 21:59:09 -- accel/accel.sh@20 -- # val= 00:07:26.866 21:59:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 21:59:09 -- accel/accel.sh@19 -- # read -r var val 00:07:28.240 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.240 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.240 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.240 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.240 21:59:10 -- 
accel/accel.sh@20 -- # val= 00:07:28.240 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.240 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.240 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.240 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.240 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.240 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.241 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.241 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.241 21:59:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.241 21:59:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:28.241 21:59:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.241 00:07:28.241 real 0m1.495s 00:07:28.241 user 0m1.342s 00:07:28.241 sys 0m0.154s 00:07:28.241 21:59:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.241 21:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 ************************************ 00:07:28.241 END TEST accel_crc32c_C2 00:07:28.241 ************************************ 00:07:28.241 21:59:10 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:28.241 21:59:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:28.241 21:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.241 21:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 ************************************ 00:07:28.241 START TEST accel_copy 00:07:28.241 ************************************ 00:07:28.241 21:59:10 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:28.241 21:59:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.241 21:59:10 -- accel/accel.sh@17 -- # local accel_module 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.241 21:59:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:28.241 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.241 21:59:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:28.241 21:59:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.241 21:59:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.241 21:59:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.241 21:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.241 21:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.241 21:59:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.241 21:59:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.241 21:59:10 -- accel/accel.sh@41 -- # jq -r . 00:07:28.241 [2024-04-24 21:59:10.474850] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:28.241 [2024-04-24 21:59:10.474927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847191 ] 00:07:28.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.500 [2024-04-24 21:59:10.550093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.500 [2024-04-24 21:59:10.677926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=0x1 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=copy 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- 
accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=software 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=32 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=32 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=1 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val=Yes 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 
-- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:28.500 21:59:10 -- accel/accel.sh@20 -- # val= 00:07:28.500 21:59:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # IFS=: 00:07:28.500 21:59:10 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@20 -- # val= 00:07:29.874 21:59:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:11 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.874 21:59:11 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:07:29.874 21:59:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.874 00:07:29.874 real 0m1.510s 00:07:29.874 user 0m1.350s 00:07:29.874 sys 0m0.161s 00:07:29.874 21:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.874 21:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.874 ************************************ 00:07:29.874 END TEST accel_copy 00:07:29.874 ************************************ 00:07:29.874 21:59:11 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:29.874 21:59:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:29.874 21:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.874 21:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.874 ************************************ 00:07:29.874 START TEST accel_fill 00:07:29.874 ************************************ 00:07:29.874 21:59:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:29.874 21:59:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.874 21:59:12 -- accel/accel.sh@17 -- # local accel_module 00:07:29.874 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:29.874 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:29.874 21:59:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:29.874 21:59:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:29.874 21:59:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.874 21:59:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.874 21:59:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.874 21:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.874 21:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.874 21:59:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.874 21:59:12 -- 
accel/accel.sh@40 -- # local IFS=, 00:07:29.874 21:59:12 -- accel/accel.sh@41 -- # jq -r . 00:07:29.874 [2024-04-24 21:59:12.101986] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:29.874 [2024-04-24 21:59:12.102054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847413 ] 00:07:30.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.133 [2024-04-24 21:59:12.170181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.133 [2024-04-24 21:59:12.291144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=0x1 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=fill 
00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=0x80 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=software 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=64 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=64 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=1 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 
21:59:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.133 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.133 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.133 21:59:12 -- accel/accel.sh@20 -- # val=Yes 00:07:30.134 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.134 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.134 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:30.134 21:59:12 -- accel/accel.sh@20 -- # val= 00:07:30.134 21:59:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # IFS=: 00:07:30.134 21:59:12 -- accel/accel.sh@19 -- # read -r var val 00:07:31.506 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 
00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@20 -- # val= 00:07:31.507 21:59:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.507 21:59:13 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:31.507 21:59:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.507 00:07:31.507 real 0m1.492s 00:07:31.507 user 0m1.339s 00:07:31.507 sys 0m0.154s 00:07:31.507 21:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.507 21:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:31.507 ************************************ 00:07:31.507 END TEST accel_fill 00:07:31.507 ************************************ 00:07:31.507 21:59:13 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:31.507 21:59:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:31.507 21:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.507 21:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:31.507 ************************************ 00:07:31.507 START TEST accel_copy_crc32c 00:07:31.507 ************************************ 00:07:31.507 21:59:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:31.507 21:59:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.507 21:59:13 -- accel/accel.sh@17 -- # local accel_module 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # IFS=: 00:07:31.507 21:59:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:31.507 21:59:13 -- accel/accel.sh@19 -- # read -r var val 00:07:31.507 21:59:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 
00:07:31.507 21:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.507 21:59:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.507 21:59:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.507 21:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.507 21:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.507 21:59:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.507 21:59:13 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.507 21:59:13 -- accel/accel.sh@41 -- # jq -r . 00:07:31.507 [2024-04-24 21:59:13.756181] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:31.507 [2024-04-24 21:59:13.756251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847639 ] 00:07:31.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.766 [2024-04-24 21:59:13.824701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.766 [2024-04-24 21:59:13.947215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=0x1 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 
21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=0 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=software 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=32 00:07:31.766 
21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=32 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=1 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val=Yes 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:31.766 21:59:14 -- accel/accel.sh@20 -- # val= 00:07:31.766 21:59:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # IFS=: 00:07:31.766 21:59:14 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 
00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.142 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.142 21:59:15 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:33.142 21:59:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.142 00:07:33.142 real 0m1.495s 00:07:33.142 user 0m1.336s 00:07:33.142 sys 0m0.161s 00:07:33.142 21:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.142 21:59:15 -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 ************************************ 00:07:33.142 END TEST accel_copy_crc32c 00:07:33.142 ************************************ 00:07:33.142 21:59:15 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.142 21:59:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:33.142 21:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.142 21:59:15 -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 ************************************ 00:07:33.142 START 
TEST accel_copy_crc32c_C2 00:07:33.142 ************************************ 00:07:33.142 21:59:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.142 21:59:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.142 21:59:15 -- accel/accel.sh@17 -- # local accel_module 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.142 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.142 21:59:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:33.142 21:59:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:33.142 21:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.142 21:59:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.142 21:59:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.142 21:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.142 21:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.142 21:59:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.142 21:59:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.142 21:59:15 -- accel/accel.sh@41 -- # jq -r . 00:07:33.142 [2024-04-24 21:59:15.364976] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:33.142 [2024-04-24 21:59:15.365036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847813 ] 00:07:33.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.400 [2024-04-24 21:59:15.430842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.400 [2024-04-24 21:59:15.550362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.400 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.400 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.400 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.400 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.400 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.400 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.400 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.400 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.400 21:59:15 -- accel/accel.sh@20 -- # val=0x1 00:07:33.400 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.400 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- 
accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=0 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=software 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@22 -- # accel_module=software 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=32 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=32 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=1 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 
-- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val=Yes 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:33.401 21:59:15 -- accel/accel.sh@20 -- # val= 00:07:33.401 21:59:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # IFS=: 00:07:33.401 21:59:15 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@20 -- # val= 00:07:34.774 21:59:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.774 21:59:16 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:34.774 21:59:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.774 00:07:34.774 real 0m1.481s 00:07:34.774 user 0m1.341s 00:07:34.774 sys 0m0.142s 00:07:34.774 21:59:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.774 21:59:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.774 ************************************ 00:07:34.774 END TEST accel_copy_crc32c_C2 00:07:34.774 ************************************ 00:07:34.774 21:59:16 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:34.774 21:59:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:34.774 21:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.774 21:59:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.774 ************************************ 00:07:34.774 START TEST accel_dualcast 00:07:34.774 ************************************ 00:07:34.774 21:59:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:34.774 21:59:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.774 21:59:16 -- accel/accel.sh@17 -- # local accel_module 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 21:59:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:34.774 21:59:16 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 21:59:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w dualcast -y 00:07:34.774 21:59:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.774 21:59:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.774 21:59:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.775 21:59:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.775 21:59:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.775 21:59:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.775 21:59:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.775 21:59:16 -- accel/accel.sh@41 -- # jq -r . 00:07:34.775 [2024-04-24 21:59:16.968688] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:34.775 [2024-04-24 21:59:16.968753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848094 ] 00:07:34.775 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.033 [2024-04-24 21:59:17.034803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.033 [2024-04-24 21:59:17.155564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=0x1 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- 
# case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=dualcast 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=software 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=32 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=32 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=1 
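The `IFS=:` / `read -r var val` pairs traced above are the harness's parser for accel_perf's `key: value` output: each line is split at the first colon and the key is dispatched through `case "$var" in`. A minimal standalone sketch of that pattern (the function name and the exact keys are hypothetical, not the real accel.sh):

```shell
# Sketch of the IFS=: parsing loop seen in the trace: split each input
# line at the first ':' into $var (key) and $val (value), then dispatch
# on the key. Keys "opcode" and "module" are illustrative only.
parse_accel_output() {
    local accel_opc="" accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            opcode) accel_opc=${val# } ;;     # strip the space after ':'
            module) accel_module=${val# } ;;
        esac
    done
    echo "$accel_opc $accel_module"
}
```

Feeding it lines such as `opcode: copy_crc32c` and `module: software` prints `copy_crc32c software`, mirroring how the trace captures `accel_opc` and `accel_module` before the final `[[ -n ... ]]` checks.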
00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val=Yes 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:35.033 21:59:17 -- accel/accel.sh@20 -- # val= 00:07:35.033 21:59:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # IFS=: 00:07:35.033 21:59:17 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # 
IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.406 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.406 21:59:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.406 21:59:18 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:36.406 21:59:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.406 00:07:36.406 real 0m1.490s 00:07:36.406 user 0m1.348s 00:07:36.406 sys 0m0.142s 00:07:36.406 21:59:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.406 21:59:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.406 ************************************ 00:07:36.406 END TEST accel_dualcast 00:07:36.406 ************************************ 00:07:36.406 21:59:18 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:36.406 21:59:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:36.406 21:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.406 21:59:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.406 ************************************ 00:07:36.406 START TEST accel_compare 00:07:36.406 ************************************ 00:07:36.406 21:59:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:36.406 21:59:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.406 21:59:18 -- accel/accel.sh@17 -- # local accel_module 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.406 21:59:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:36.406 21:59:18 -- accel/accel.sh@19 -- # read -r var 
val 00:07:36.406 21:59:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:36.406 21:59:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.406 21:59:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.406 21:59:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.406 21:59:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.406 21:59:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.406 21:59:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.406 21:59:18 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.406 21:59:18 -- accel/accel.sh@41 -- # jq -r . 00:07:36.406 [2024-04-24 21:59:18.589965] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:36.406 [2024-04-24 21:59:18.590026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848251 ] 00:07:36.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.406 [2024-04-24 21:59:18.656446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.665 [2024-04-24 21:59:18.778080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val=0x1 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- 
accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val=compare 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val=software 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@22 -- # accel_module=software 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val=32 00:07:36.665 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.665 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.665 21:59:18 -- accel/accel.sh@20 -- # val=32 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.666 21:59:18 -- accel/accel.sh@20 -- # val=1 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.666 21:59:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.666 21:59:18 -- accel/accel.sh@20 -- # val=Yes 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.666 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:36.666 21:59:18 -- accel/accel.sh@20 -- # val= 00:07:36.666 21:59:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # IFS=: 00:07:36.666 21:59:18 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 
21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.044 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.044 21:59:20 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:38.044 21:59:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.044 00:07:38.044 real 0m1.489s 00:07:38.044 user 0m1.342s 00:07:38.044 sys 0m0.148s 00:07:38.044 21:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.044 21:59:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.044 ************************************ 00:07:38.044 END TEST accel_compare 00:07:38.044 ************************************ 00:07:38.044 21:59:20 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:38.044 21:59:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.044 21:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.044 21:59:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.044 ************************************ 00:07:38.044 START TEST accel_xor 00:07:38.044 ************************************ 00:07:38.044 21:59:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:38.044 21:59:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.044 21:59:20 -- accel/accel.sh@17 -- # local accel_module 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # IFS=: 
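The check `[[ software == \s\o\f\t\w\a\r\e ]]` in the trace backslash-escapes every character because the right-hand side of `==` inside `[[ ]]` is treated as a glob pattern; escaping (or quoting) each character forces a literal comparison. A small illustration of the same idiom (the helper name is hypothetical):

```shell
# Inside [[ ]], an unquoted right-hand side of == is a glob pattern.
# Quoting the RHS (or escaping each character, as accel.sh does with
# \s\o\f\t\w\a\r\e) makes the comparison a literal string match.
is_software_module() {
    local module=$1
    [[ $module == "software" ]]   # quoted: literal match, no globbing
}
```

`is_software_module software` succeeds while `is_software_module hardware` fails; by contrast an unquoted glob such as `[[ $module == soft* ]]` would accept any module whose name starts with `soft`.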
00:07:38.044 21:59:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:38.044 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.044 21:59:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:38.044 21:59:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.044 21:59:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.044 21:59:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.044 21:59:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.044 21:59:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.044 21:59:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.044 21:59:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.044 21:59:20 -- accel/accel.sh@41 -- # jq -r . 00:07:38.044 [2024-04-24 21:59:20.235931] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:38.044 [2024-04-24 21:59:20.235996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848540 ] 00:07:38.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.377 [2024-04-24 21:59:20.303281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.377 [2024-04-24 21:59:20.430451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.377 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.377 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.377 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.377 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.377 21:59:20 -- accel/accel.sh@20 -- # val=0x1 00:07:38.377 
21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.377 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.377 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.377 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.377 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.377 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=xor 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=2 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=software 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@22 -- # accel_module=software 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- 
accel/accel.sh@20 -- # val=32 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=32 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=1 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val=Yes 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:38.378 21:59:20 -- accel/accel.sh@20 -- # val= 00:07:38.378 21:59:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # IFS=: 00:07:38.378 21:59:20 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 
21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@20 -- # val= 00:07:39.749 21:59:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.749 21:59:21 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:39.749 21:59:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.749 00:07:39.749 real 0m1.499s 00:07:39.749 user 0m1.348s 00:07:39.749 sys 0m0.152s 00:07:39.749 21:59:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.749 21:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:39.749 ************************************ 00:07:39.749 END TEST accel_xor 00:07:39.749 ************************************ 00:07:39.749 21:59:21 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:39.749 21:59:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:39.749 21:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.749 21:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:39.749 ************************************ 00:07:39.749 
START TEST accel_xor 00:07:39.749 ************************************ 00:07:39.749 21:59:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:39.749 21:59:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.749 21:59:21 -- accel/accel.sh@17 -- # local accel_module 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # IFS=: 00:07:39.749 21:59:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:39.749 21:59:21 -- accel/accel.sh@19 -- # read -r var val 00:07:39.749 21:59:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:39.749 21:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.749 21:59:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.749 21:59:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.749 21:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.749 21:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.749 21:59:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.749 21:59:21 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.749 21:59:21 -- accel/accel.sh@41 -- # jq -r . 00:07:39.749 [2024-04-24 21:59:21.868604] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:39.749 [2024-04-24 21:59:21.868667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848703 ] 00:07:39.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.749 [2024-04-24 21:59:21.960734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.008 [2024-04-24 21:59:22.082117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val=0x1 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val=xor 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- 
accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val=3 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val=software 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.008 21:59:22 -- accel/accel.sh@20 -- # val=32 00:07:40.008 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.008 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val=32 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val=1 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- 
# read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val=Yes 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:40.009 21:59:22 -- accel/accel.sh@20 -- # val= 00:07:40.009 21:59:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # IFS=: 00:07:40.009 21:59:22 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.381 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.381 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.381 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.381 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.381 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.381 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.381 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.382 21:59:23 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:41.382 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.382 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.382 21:59:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.382 21:59:23 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:41.382 21:59:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.382 00:07:41.382 real 0m1.526s 00:07:41.382 user 0m1.352s 00:07:41.382 sys 0m0.174s 00:07:41.382 21:59:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.382 21:59:23 -- common/autotest_common.sh@10 -- # set +x 00:07:41.382 ************************************ 00:07:41.382 END TEST accel_xor 00:07:41.382 ************************************ 00:07:41.382 21:59:23 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:41.382 21:59:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:41.382 21:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.382 21:59:23 -- common/autotest_common.sh@10 -- # set +x 00:07:41.382 ************************************ 00:07:41.382 START TEST accel_dif_verify 00:07:41.382 ************************************ 00:07:41.382 21:59:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:41.382 21:59:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.382 21:59:23 -- accel/accel.sh@17 -- # local accel_module 00:07:41.382 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.382 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.382 21:59:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:41.382 21:59:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:41.382 21:59:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.382 21:59:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.382 21:59:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.382 21:59:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 
]] 00:07:41.382 21:59:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.382 21:59:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.382 21:59:23 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.382 21:59:23 -- accel/accel.sh@41 -- # jq -r . 00:07:41.382 [2024-04-24 21:59:23.514130] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:41.382 [2024-04-24 21:59:23.514227] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848872 ] 00:07:41.382 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.382 [2024-04-24 21:59:23.589538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.640 [2024-04-24 21:59:23.711819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=0x1 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=dif_verify 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=software 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=32 00:07:41.640 
21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=32 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=1 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val=No 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:41.640 21:59:23 -- accel/accel.sh@20 -- # val= 00:07:41.640 21:59:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # IFS=: 00:07:41.640 21:59:23 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 
00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@20 -- # val= 00:07:43.012 21:59:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:24 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.012 21:59:24 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:43.012 21:59:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.012 00:07:43.012 real 0m1.507s 00:07:43.012 user 0m1.350s 00:07:43.012 sys 0m0.159s 00:07:43.012 21:59:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.012 21:59:24 -- common/autotest_common.sh@10 -- # set +x 00:07:43.012 ************************************ 00:07:43.012 END TEST accel_dif_verify 00:07:43.012 ************************************ 00:07:43.012 21:59:25 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:43.012 21:59:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:43.012 21:59:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.012 21:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.012 ************************************ 00:07:43.012 START TEST 
accel_dif_generate 00:07:43.012 ************************************ 00:07:43.012 21:59:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:43.012 21:59:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.012 21:59:25 -- accel/accel.sh@17 -- # local accel_module 00:07:43.012 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.012 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.012 21:59:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:43.012 21:59:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:43.012 21:59:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.012 21:59:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.012 21:59:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.012 21:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.012 21:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.012 21:59:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.012 21:59:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.012 21:59:25 -- accel/accel.sh@41 -- # jq -r . 00:07:43.012 [2024-04-24 21:59:25.135645] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:43.012 [2024-04-24 21:59:25.135705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849151 ] 00:07:43.012 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.012 [2024-04-24 21:59:25.210978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.271 [2024-04-24 21:59:25.332701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=0x1 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=dif_generate 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 
-- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=software 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=32 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=32 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 
-- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=1 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val=No 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:43.271 21:59:25 -- accel/accel.sh@20 -- # val= 00:07:43.271 21:59:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # IFS=: 00:07:43.271 21:59:25 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 
21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@20 -- # val= 00:07:44.641 21:59:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # IFS=: 00:07:44.641 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.641 21:59:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.641 21:59:26 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:44.641 21:59:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.641 00:07:44.641 real 0m1.499s 00:07:44.641 user 0m1.348s 00:07:44.641 sys 0m0.154s 00:07:44.641 21:59:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.641 21:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.642 ************************************ 00:07:44.642 END TEST accel_dif_generate 00:07:44.642 ************************************ 00:07:44.642 21:59:26 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:44.642 21:59:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:44.642 21:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.642 21:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:44.642 ************************************ 00:07:44.642 START TEST accel_dif_generate_copy 00:07:44.642 ************************************ 00:07:44.642 21:59:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:44.642 21:59:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.642 21:59:26 -- accel/accel.sh@17 -- # local accel_module 00:07:44.642 21:59:26 -- accel/accel.sh@19 -- # IFS=: 
00:07:44.642 21:59:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:44.642 21:59:26 -- accel/accel.sh@19 -- # read -r var val 00:07:44.642 21:59:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:44.642 21:59:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.642 21:59:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.642 21:59:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.642 21:59:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.642 21:59:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.642 21:59:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.642 21:59:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.642 21:59:26 -- accel/accel.sh@41 -- # jq -r . 00:07:44.642 [2024-04-24 21:59:26.771475] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:44.642 [2024-04-24 21:59:26.771540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849310 ] 00:07:44.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.642 [2024-04-24 21:59:26.865131] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.900 [2024-04-24 21:59:26.987105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # 
val=0x1 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=software 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@22 -- # accel_module=software 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- 
accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=32 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=32 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=1 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val=No 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:44.900 21:59:27 -- accel/accel.sh@20 -- # val= 00:07:44.900 21:59:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # IFS=: 00:07:44.900 21:59:27 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 
21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.270 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.270 21:59:28 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:46.270 21:59:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.270 00:07:46.270 real 0m1.522s 00:07:46.270 user 0m1.355s 00:07:46.270 sys 0m0.167s 00:07:46.270 21:59:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.270 21:59:28 -- common/autotest_common.sh@10 -- # set +x 00:07:46.270 ************************************ 00:07:46.270 END TEST accel_dif_generate_copy 00:07:46.270 ************************************ 00:07:46.270 21:59:28 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:46.270 21:59:28 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.270 21:59:28 -- common/autotest_common.sh@1087 -- # 
'[' 8 -le 1 ']' 00:07:46.270 21:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.270 21:59:28 -- common/autotest_common.sh@10 -- # set +x 00:07:46.270 ************************************ 00:07:46.270 START TEST accel_comp 00:07:46.270 ************************************ 00:07:46.270 21:59:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.270 21:59:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.270 21:59:28 -- accel/accel.sh@17 -- # local accel_module 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.270 21:59:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.270 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.270 21:59:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.270 21:59:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.270 21:59:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.270 21:59:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.270 21:59:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.270 21:59:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.270 21:59:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.270 21:59:28 -- accel/accel.sh@40 -- # local IFS=, 00:07:46.270 21:59:28 -- accel/accel.sh@41 -- # jq -r . 00:07:46.270 [2024-04-24 21:59:28.413370] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:46.270 [2024-04-24 21:59:28.413449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849597 ] 00:07:46.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.270 [2024-04-24 21:59:28.489478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.528 [2024-04-24 21:59:28.610881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=0x1 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 
-- # val=compress 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=software 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=32 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=32 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.528 21:59:28 -- accel/accel.sh@20 -- # val=1 00:07:46.528 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.528 21:59:28 -- accel/accel.sh@19 -- # IFS=: 
00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.529 21:59:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.529 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.529 21:59:28 -- accel/accel.sh@20 -- # val=No 00:07:46.529 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.529 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.529 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:46.529 21:59:28 -- accel/accel.sh@20 -- # val= 00:07:46.529 21:59:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # IFS=: 00:07:46.529 21:59:28 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # 
val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@20 -- # val= 00:07:47.902 21:59:29 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:29 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.902 21:59:29 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:47.902 21:59:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.902 00:07:47.902 real 0m1.504s 00:07:47.902 user 0m1.351s 00:07:47.902 sys 0m0.156s 00:07:47.902 21:59:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.902 21:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:47.902 ************************************ 00:07:47.902 END TEST accel_comp 00:07:47.902 ************************************ 00:07:47.902 21:59:29 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:47.902 21:59:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:47.902 21:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.902 21:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:47.902 ************************************ 00:07:47.902 START TEST accel_decomp 00:07:47.902 ************************************ 00:07:47.902 21:59:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:47.902 21:59:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.902 21:59:30 -- accel/accel.sh@17 -- # local accel_module 00:07:47.902 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:47.902 21:59:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:47.902 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:47.902 21:59:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:47.902 21:59:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.902 21:59:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.902 21:59:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.902 21:59:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.902 21:59:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.902 21:59:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.902 21:59:30 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.902 21:59:30 -- accel/accel.sh@41 -- # jq -r . 00:07:47.902 [2024-04-24 21:59:30.039155] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:47.903 [2024-04-24 21:59:30.039226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849760 ] 00:07:47.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.903 [2024-04-24 21:59:30.110224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.161 [2024-04-24 21:59:30.232375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 
00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val=0x1 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val=decompress 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.161 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.161 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.161 21:59:30 -- accel/accel.sh@20 -- # val=software 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 
21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val=32 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val=32 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val=1 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val=Yes 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r var val 00:07:48.162 21:59:30 -- accel/accel.sh@20 -- # val= 00:07:48.162 21:59:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # IFS=: 00:07:48.162 21:59:30 -- accel/accel.sh@19 -- # read -r 
var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.534 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.534 21:59:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.534 21:59:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.534 00:07:49.534 real 0m1.501s 00:07:49.534 user 0m1.348s 00:07:49.534 sys 0m0.156s 00:07:49.534 21:59:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.534 21:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 ************************************ 00:07:49.534 END TEST accel_decomp 00:07:49.534 ************************************ 
00:07:49.534 21:59:31 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.534 21:59:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:49.534 21:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.534 21:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 ************************************ 00:07:49.534 START TEST accel_decmop_full 00:07:49.534 ************************************ 00:07:49.534 21:59:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.534 21:59:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.534 21:59:31 -- accel/accel.sh@17 -- # local accel_module 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.534 21:59:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.534 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.534 21:59:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.534 21:59:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.534 21:59:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.534 21:59:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.534 21:59:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.534 21:59:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.534 21:59:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.534 21:59:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.534 21:59:31 -- accel/accel.sh@41 -- # jq -r . 00:07:49.534 [2024-04-24 21:59:31.656682] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:49.534 [2024-04-24 21:59:31.656748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850036 ] 00:07:49.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.534 [2024-04-24 21:59:31.723888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.791 [2024-04-24 21:59:31.845178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.791 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.791 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.791 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=0x1 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 
-- # val=decompress 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=software 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=32 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=32 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=1 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # 
IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val=Yes 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:49.792 21:59:31 -- accel/accel.sh@20 -- # val= 00:07:49.792 21:59:31 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # IFS=: 00:07:49.792 21:59:31 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 
-- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.163 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 21:59:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.163 21:59:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.163 21:59:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.163 00:07:51.163 real 0m1.508s 00:07:51.163 user 0m1.359s 00:07:51.163 sys 0m0.151s 00:07:51.163 21:59:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.163 21:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:51.163 ************************************ 00:07:51.163 END TEST accel_decmop_full 00:07:51.163 ************************************ 00:07:51.163 21:59:33 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.163 21:59:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:51.163 21:59:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.163 21:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:51.163 ************************************ 00:07:51.163 START TEST accel_decomp_mcore 00:07:51.163 ************************************ 00:07:51.163 21:59:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.163 21:59:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.163 21:59:33 -- accel/accel.sh@17 -- # local accel_module 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.163 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.163 
21:59:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.163 21:59:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.163 21:59:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.163 21:59:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.163 21:59:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.163 21:59:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.163 21:59:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.163 21:59:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.163 21:59:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.163 21:59:33 -- accel/accel.sh@41 -- # jq -r . 00:07:51.163 [2024-04-24 21:59:33.339079] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:51.163 [2024-04-24 21:59:33.339143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850212 ] 00:07:51.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.163 [2024-04-24 21:59:33.413253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.421 [2024-04-24 21:59:33.537690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.421 [2024-04-24 21:59:33.537747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.421 [2024-04-24 21:59:33.537800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.421 [2024-04-24 21:59:33.537804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 
-- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val=0xf 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val=decompress 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 
00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val=software 00:07:51.421 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.421 21:59:33 -- accel/accel.sh@22 -- # accel_module=software 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.421 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.421 21:59:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val=32 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val=32 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val=1 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val=Yes 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 
21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:51.422 21:59:33 -- accel/accel.sh@20 -- # val= 00:07:51.422 21:59:33 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # IFS=: 00:07:51.422 21:59:33 -- accel/accel.sh@19 -- # read -r var val 00:07:52.792 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.792 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.792 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.792 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.792 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 
-- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@20 -- # val= 00:07:52.793 21:59:34 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.793 21:59:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.793 21:59:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.793 00:07:52.793 real 0m1.516s 00:07:52.793 user 0m4.839s 00:07:52.793 sys 0m0.158s 00:07:52.793 21:59:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.793 21:59:34 -- common/autotest_common.sh@10 -- # set +x 00:07:52.793 ************************************ 00:07:52.793 END TEST accel_decomp_mcore 00:07:52.793 ************************************ 00:07:52.793 21:59:34 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.793 21:59:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:52.793 21:59:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.793 21:59:34 -- common/autotest_common.sh@10 -- # set +x 00:07:52.793 ************************************ 00:07:52.793 START TEST accel_decomp_full_mcore 00:07:52.793 ************************************ 00:07:52.793 21:59:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.793 21:59:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.793 21:59:34 -- accel/accel.sh@17 -- # local accel_module 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # IFS=: 00:07:52.793 21:59:34 -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.793 21:59:34 -- accel/accel.sh@19 -- # read -r var val 00:07:52.793 21:59:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.793 21:59:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.793 21:59:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.793 21:59:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.793 21:59:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.793 21:59:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.793 21:59:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.793 21:59:34 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.793 21:59:34 -- accel/accel.sh@41 -- # jq -r . 00:07:52.793 [2024-04-24 21:59:35.000360] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:52.793 [2024-04-24 21:59:35.000441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850385 ] 00:07:52.793 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.051 [2024-04-24 21:59:35.068883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.051 [2024-04-24 21:59:35.192595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.051 [2024-04-24 21:59:35.192650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.051 [2024-04-24 21:59:35.192702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.051 [2024-04-24 21:59:35.192705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=0xf 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 
-- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=decompress 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=software 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=32 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=32 00:07:53.051 21:59:35 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=1 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val=Yes 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:53.051 21:59:35 -- accel/accel.sh@20 -- # val= 00:07:53.051 21:59:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # IFS=: 00:07:53.051 21:59:35 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 
21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.424 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.424 21:59:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.424 21:59:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:54.424 21:59:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.424 00:07:54.424 real 0m1.521s 00:07:54.424 user 0m4.886s 00:07:54.424 sys 0m0.161s 00:07:54.424 21:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.424 21:59:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.424 ************************************ 00:07:54.424 END TEST 
accel_decomp_full_mcore 00:07:54.424 ************************************ 00:07:54.424 21:59:36 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 21:59:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:54.424 21:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.424 21:59:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.424 ************************************ 00:07:54.424 START TEST accel_decomp_mthread 00:07:54.424 ************************************ 00:07:54.424 21:59:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 21:59:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.424 21:59:36 -- accel/accel.sh@17 -- # local accel_module 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.424 21:59:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.424 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.425 21:59:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:54.425 21:59:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.425 21:59:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.425 21:59:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.425 21:59:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.425 21:59:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.425 21:59:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.425 21:59:36 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.425 21:59:36 -- accel/accel.sh@41 -- # jq -r . 
00:07:54.425 [2024-04-24 21:59:36.675176] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:54.425 [2024-04-24 21:59:36.675240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850661 ] 00:07:54.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.683 [2024-04-24 21:59:36.743135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.683 [2024-04-24 21:59:36.864271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=0x1 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- 
accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=decompress 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=software 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@22 -- # accel_module=software 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=32 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=32 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- 
accel/accel.sh@20 -- # val=2 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val=Yes 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:54.683 21:59:36 -- accel/accel.sh@20 -- # val= 00:07:54.683 21:59:36 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.683 21:59:36 -- accel/accel.sh@19 -- # IFS=: 00:07:54.941 21:59:36 -- accel/accel.sh@19 -- # read -r var val 00:07:56.312 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.312 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.312 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.312 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.312 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.312 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.312 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.313 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.313 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.313 21:59:38 
-- accel/accel.sh@19 -- # IFS=: 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.313 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.313 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.313 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.313 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.313 21:59:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.313 21:59:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:56.313 21:59:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.313 00:07:56.313 real 0m1.500s 00:07:56.313 user 0m1.345s 00:07:56.313 sys 0m0.157s 00:07:56.313 21:59:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.313 21:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:56.313 ************************************ 00:07:56.313 END TEST accel_decomp_mthread 00:07:56.313 ************************************ 00:07:56.313 21:59:38 -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.313 21:59:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:56.313 21:59:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.313 21:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:56.313 ************************************ 00:07:56.313 START TEST accel_decomp_full_mthread 00:07:56.313 ************************************ 00:07:56.313 21:59:38 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.313 21:59:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.313 21:59:38 -- accel/accel.sh@17 -- # local accel_module 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.313 21:59:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.313 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.313 21:59:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:56.313 21:59:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.313 21:59:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.313 21:59:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.313 21:59:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.313 21:59:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.313 21:59:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.313 21:59:38 -- accel/accel.sh@40 -- # local IFS=, 00:07:56.313 21:59:38 -- accel/accel.sh@41 -- # jq -r . 00:07:56.313 [2024-04-24 21:59:38.312658] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:56.313 [2024-04-24 21:59:38.312739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850827 ] 00:07:56.313 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.313 [2024-04-24 21:59:38.383843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.313 [2024-04-24 21:59:38.506468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=0x1 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 
-- # val=decompress 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=software 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@22 -- # accel_module=software 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=32 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=32 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=2 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # 
IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val=Yes 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.570 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.570 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.570 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:56.571 21:59:38 -- accel/accel.sh@20 -- # val= 00:07:56.571 21:59:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.571 21:59:38 -- accel/accel.sh@19 -- # IFS=: 00:07:56.571 21:59:38 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 
-- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@20 -- # val= 00:07:57.943 21:59:39 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # IFS=: 00:07:57.943 21:59:39 -- accel/accel.sh@19 -- # read -r var val 00:07:57.943 21:59:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.943 21:59:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:57.943 21:59:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.943 00:07:57.943 real 0m1.528s 00:07:57.943 user 0m1.370s 00:07:57.943 sys 0m0.160s 00:07:57.943 21:59:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.943 21:59:39 -- common/autotest_common.sh@10 -- # set +x 00:07:57.943 ************************************ 00:07:57.943 END TEST accel_deomp_full_mthread 00:07:57.943 ************************************ 00:07:57.943 21:59:39 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:57.943 21:59:39 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.943 21:59:39 -- accel/accel.sh@137 -- # build_accel_config 00:07:57.943 21:59:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.943 21:59:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.943 21:59:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.943 21:59:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.943 21:59:39 -- common/autotest_common.sh@10 -- # set +x 00:07:57.943 21:59:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.943 21:59:39 -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.943 21:59:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.943 21:59:39 -- accel/accel.sh@40 -- # local IFS=, 00:07:57.943 21:59:39 -- accel/accel.sh@41 -- # jq -r . 00:07:57.943 ************************************ 00:07:57.943 START TEST accel_dif_functional_tests 00:07:57.943 ************************************ 00:07:57.943 21:59:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.943 [2024-04-24 21:59:39.995212] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:07:57.943 [2024-04-24 21:59:39.995304] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851110 ] 00:07:57.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.943 [2024-04-24 21:59:40.068260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.943 [2024-04-24 21:59:40.189007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.943 [2024-04-24 21:59:40.189062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.943 [2024-04-24 21:59:40.189066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.201 00:07:58.201 00:07:58.201 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.201 http://cunit.sourceforge.net/ 00:07:58.201 00:07:58.201 00:07:58.201 Suite: accel_dif 00:07:58.201 Test: verify: DIF generated, GUARD check ...passed 00:07:58.201 Test: verify: DIF generated, APPTAG check ...passed 00:07:58.201 Test: verify: DIF generated, REFTAG check ...passed 00:07:58.201 Test: verify: DIF not generated, GUARD check ...[2024-04-24 21:59:40.293218] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.201 [2024-04-24 21:59:40.293285] 
dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:58.201 passed 00:07:58.201 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 21:59:40.293338] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.201 [2024-04-24 21:59:40.293368] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:58.201 passed 00:07:58.201 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 21:59:40.293424] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.201 [2024-04-24 21:59:40.293457] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:58.201 passed 00:07:58.201 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:58.201 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 21:59:40.293531] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:58.201 passed 00:07:58.201 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:58.201 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:58.201 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:58.201 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 21:59:40.293690] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:58.201 passed 00:07:58.201 Test: generate copy: DIF generated, GUARD check ...passed 00:07:58.201 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:58.201 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:58.201 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:58.201 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:58.201 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:58.201 Test: generate copy: 
iovecs-len validate ...[2024-04-24 21:59:40.293950] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:58.201 passed 00:07:58.201 Test: generate copy: buffer alignment validate ...passed 00:07:58.201 00:07:58.201 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.201 suites 1 1 n/a 0 0 00:07:58.201 tests 20 20 20 0 0 00:07:58.201 asserts 204 204 204 0 n/a 00:07:58.201 00:07:58.201 Elapsed time = 0.003 seconds 00:07:58.458 00:07:58.459 real 0m0.620s 00:07:58.459 user 0m0.892s 00:07:58.459 sys 0m0.197s 00:07:58.459 21:59:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.459 21:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:58.459 ************************************ 00:07:58.459 END TEST accel_dif_functional_tests 00:07:58.459 ************************************ 00:07:58.459 00:07:58.459 real 0m37.119s 00:07:58.459 user 0m39.245s 00:07:58.459 sys 0m6.139s 00:07:58.459 21:59:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.459 21:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:58.459 ************************************ 00:07:58.459 END TEST accel 00:07:58.459 ************************************ 00:07:58.459 21:59:40 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:58.459 21:59:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.459 21:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.459 21:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:58.716 ************************************ 00:07:58.716 START TEST accel_rpc 00:07:58.716 ************************************ 00:07:58.716 21:59:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:58.716 * Looking for test storage... 
00:07:58.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:58.716 21:59:40 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.716 21:59:40 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3851246 00:07:58.716 21:59:40 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.716 21:59:40 -- accel/accel_rpc.sh@15 -- # waitforlisten 3851246 00:07:58.716 21:59:40 -- common/autotest_common.sh@817 -- # '[' -z 3851246 ']' 00:07:58.716 21:59:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.716 21:59:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:58.716 21:59:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.716 21:59:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:58.716 21:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:58.716 [2024-04-24 21:59:40.848648] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:07:58.716 [2024-04-24 21:59:40.848744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851246 ] 00:07:58.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.716 [2024-04-24 21:59:40.918525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.974 [2024-04-24 21:59:41.043450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.974 21:59:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:58.974 21:59:41 -- common/autotest_common.sh@850 -- # return 0 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.974 21:59:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.974 21:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.974 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:58.974 ************************************ 00:07:58.974 START TEST accel_assign_opcode 00:07:58.974 ************************************ 00:07:58.974 21:59:41 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.974 21:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.974 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:58.974 [2024-04-24 21:59:41.208401] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.974 21:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.974 21:59:41 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.974 21:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.974 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:58.974 [2024-04-24 21:59:41.216385] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.974 21:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.974 21:59:41 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.974 21:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.974 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:59.232 21:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.232 21:59:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:59.232 21:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.232 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:59.232 21:59:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:59.232 21:59:41 -- accel/accel_rpc.sh@42 -- # grep software 00:07:59.518 21:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.518 software 00:07:59.518 00:07:59.518 real 0m0.359s 00:07:59.518 user 0m0.084s 00:07:59.518 sys 0m0.007s 00:07:59.519 21:59:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:59.519 21:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:59.519 ************************************ 00:07:59.519 END TEST accel_assign_opcode 00:07:59.519 ************************************ 00:07:59.519 21:59:41 -- accel/accel_rpc.sh@55 -- # killprocess 3851246 00:07:59.519 21:59:41 -- common/autotest_common.sh@936 -- # '[' -z 3851246 ']' 00:07:59.519 21:59:41 -- common/autotest_common.sh@940 -- # kill -0 3851246 00:07:59.519 21:59:41 -- common/autotest_common.sh@941 -- # uname 00:07:59.519 21:59:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.519 21:59:41 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 3851246 00:07:59.519 21:59:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.519 21:59:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.519 21:59:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3851246' 00:07:59.519 killing process with pid 3851246 00:07:59.519 21:59:41 -- common/autotest_common.sh@955 -- # kill 3851246 00:07:59.519 21:59:41 -- common/autotest_common.sh@960 -- # wait 3851246 00:08:00.085 00:08:00.085 real 0m1.369s 00:08:00.085 user 0m1.393s 00:08:00.085 sys 0m0.503s 00:08:00.085 21:59:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.085 21:59:42 -- common/autotest_common.sh@10 -- # set +x 00:08:00.085 ************************************ 00:08:00.085 END TEST accel_rpc 00:08:00.085 ************************************ 00:08:00.085 21:59:42 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:00.085 21:59:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.085 21:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.085 21:59:42 -- common/autotest_common.sh@10 -- # set +x 00:08:00.085 ************************************ 00:08:00.085 START TEST app_cmdline 00:08:00.085 ************************************ 00:08:00.085 21:59:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:00.085 * Looking for test storage... 
00:08:00.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.085 21:59:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:00.085 21:59:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3851532 00:08:00.085 21:59:42 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:00.085 21:59:42 -- app/cmdline.sh@18 -- # waitforlisten 3851532 00:08:00.085 21:59:42 -- common/autotest_common.sh@817 -- # '[' -z 3851532 ']' 00:08:00.085 21:59:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.085 21:59:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:00.085 21:59:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.085 21:59:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:00.086 21:59:42 -- common/autotest_common.sh@10 -- # set +x 00:08:00.343 [2024-04-24 21:59:42.365022] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:08:00.343 [2024-04-24 21:59:42.365123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851532 ] 00:08:00.343 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.343 [2024-04-24 21:59:42.436736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.343 [2024-04-24 21:59:42.557096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.601 21:59:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:00.601 21:59:42 -- common/autotest_common.sh@850 -- # return 0 00:08:00.601 21:59:42 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:00.859 { 00:08:00.859 "version": "SPDK v24.05-pre git sha1 4907d1565", 00:08:00.859 "fields": { 00:08:00.859 "major": 24, 00:08:00.859 "minor": 5, 00:08:00.859 "patch": 0, 00:08:00.859 "suffix": "-pre", 00:08:00.859 "commit": "4907d1565" 00:08:00.859 } 00:08:00.859 } 00:08:01.118 21:59:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.118 21:59:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.118 21:59:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.118 21:59:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.118 21:59:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.118 21:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.118 21:59:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.118 21:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:01.118 21:59:43 -- app/cmdline.sh@26 -- # sort 00:08:01.118 21:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.118 21:59:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.118 21:59:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.118 21:59:43 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.118 21:59:43 -- common/autotest_common.sh@638 -- # local es=0 00:08:01.118 21:59:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.118 21:59:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.118 21:59:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:01.118 21:59:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.118 21:59:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:01.118 21:59:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.118 21:59:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:01.118 21:59:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.118 21:59:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:01.118 21:59:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.376 request: 00:08:01.376 { 00:08:01.376 "method": "env_dpdk_get_mem_stats", 00:08:01.376 "req_id": 1 00:08:01.376 } 00:08:01.376 Got JSON-RPC error response 00:08:01.376 response: 00:08:01.376 { 00:08:01.376 "code": -32601, 00:08:01.376 "message": "Method not found" 00:08:01.376 } 00:08:01.376 21:59:43 -- common/autotest_common.sh@641 -- # es=1 00:08:01.376 21:59:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:01.376 21:59:43 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:01.376 21:59:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:01.376 21:59:43 -- app/cmdline.sh@1 -- # killprocess 3851532 00:08:01.376 21:59:43 -- common/autotest_common.sh@936 -- # '[' -z 3851532 ']' 00:08:01.376 21:59:43 -- common/autotest_common.sh@940 -- # kill -0 3851532 00:08:01.376 21:59:43 -- common/autotest_common.sh@941 -- # uname 00:08:01.376 21:59:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.376 21:59:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3851532 00:08:01.376 21:59:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.376 21:59:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.376 21:59:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3851532' 00:08:01.376 killing process with pid 3851532 00:08:01.376 21:59:43 -- common/autotest_common.sh@955 -- # kill 3851532 00:08:01.376 21:59:43 -- common/autotest_common.sh@960 -- # wait 3851532 00:08:01.941 00:08:01.941 real 0m1.813s 00:08:01.941 user 0m2.291s 00:08:01.941 sys 0m0.544s 00:08:01.941 21:59:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.941 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:01.941 ************************************ 00:08:01.941 END TEST app_cmdline 00:08:01.941 ************************************ 00:08:01.941 21:59:44 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:01.941 21:59:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.941 21:59:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.941 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:01.941 ************************************ 00:08:01.941 START TEST version 00:08:01.941 ************************************ 00:08:01.941 21:59:44 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:02.199 * Looking for test storage... 00:08:02.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:02.199 21:59:44 -- app/version.sh@17 -- # get_header_version major 00:08:02.199 21:59:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:02.199 21:59:44 -- app/version.sh@14 -- # cut -f2 00:08:02.199 21:59:44 -- app/version.sh@14 -- # tr -d '"' 00:08:02.199 21:59:44 -- app/version.sh@17 -- # major=24 00:08:02.199 21:59:44 -- app/version.sh@18 -- # get_header_version minor 00:08:02.199 21:59:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:02.199 21:59:44 -- app/version.sh@14 -- # cut -f2 00:08:02.199 21:59:44 -- app/version.sh@14 -- # tr -d '"' 00:08:02.199 21:59:44 -- app/version.sh@18 -- # minor=5 00:08:02.199 21:59:44 -- app/version.sh@19 -- # get_header_version patch 00:08:02.199 21:59:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:02.199 21:59:44 -- app/version.sh@14 -- # cut -f2 00:08:02.199 21:59:44 -- app/version.sh@14 -- # tr -d '"' 00:08:02.199 21:59:44 -- app/version.sh@19 -- # patch=0 00:08:02.199 21:59:44 -- app/version.sh@20 -- # get_header_version suffix 00:08:02.199 21:59:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:02.199 21:59:44 -- app/version.sh@14 -- # cut -f2 00:08:02.199 21:59:44 -- app/version.sh@14 -- # tr -d '"' 00:08:02.199 21:59:44 -- app/version.sh@20 -- # suffix=-pre 00:08:02.199 21:59:44 -- app/version.sh@22 -- # version=24.5 00:08:02.199 21:59:44 -- app/version.sh@25 -- # (( patch != 0 )) 
00:08:02.199 21:59:44 -- app/version.sh@28 -- # version=24.5rc0 00:08:02.200 21:59:44 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:02.200 21:59:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:02.200 21:59:44 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:02.200 21:59:44 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:02.200 00:08:02.200 real 0m0.107s 00:08:02.200 user 0m0.059s 00:08:02.200 sys 0m0.072s 00:08:02.200 21:59:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.200 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.200 ************************************ 00:08:02.200 END TEST version 00:08:02.200 ************************************ 00:08:02.200 21:59:44 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@194 -- # uname -s 00:08:02.200 21:59:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:02.200 21:59:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:02.200 21:59:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:02.200 21:59:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@258 -- # timing_exit lib 00:08:02.200 21:59:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:02.200 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.200 21:59:44 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:08:02.200 21:59:44 
-- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:08:02.200 21:59:44 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:02.200 21:59:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.200 21:59:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.200 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.458 ************************************ 00:08:02.458 START TEST nvmf_tcp 00:08:02.459 ************************************ 00:08:02.459 21:59:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:02.459 * Looking for test storage... 00:08:02.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.459 21:59:44 -- nvmf/common.sh@7 -- # uname -s 00:08:02.459 21:59:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.459 21:59:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.459 21:59:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.459 21:59:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.459 21:59:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.459 21:59:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.459 21:59:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.459 21:59:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.459 21:59:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.459 21:59:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.459 21:59:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:02.459 21:59:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:02.459 21:59:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.459 21:59:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.459 21:59:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.459 21:59:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.459 21:59:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.459 21:59:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.459 21:59:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.459 21:59:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.459 21:59:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.459 21:59:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.459 21:59:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.459 21:59:44 -- paths/export.sh@5 -- # export PATH 00:08:02.459 21:59:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.459 21:59:44 -- nvmf/common.sh@47 -- # : 0 00:08:02.459 21:59:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.459 21:59:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.459 
21:59:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.459 21:59:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.459 21:59:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.459 21:59:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.459 21:59:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.459 21:59:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:02.459 21:59:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.459 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:02.459 21:59:44 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:02.459 21:59:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.459 21:59:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.459 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 ************************************ 00:08:02.459 START TEST nvmf_example 00:08:02.459 ************************************ 00:08:02.459 21:59:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:02.719 * Looking for test storage... 
00:08:02.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.719 21:59:44 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.719 21:59:44 -- nvmf/common.sh@7 -- # uname -s 00:08:02.719 21:59:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.719 21:59:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.719 21:59:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.719 21:59:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.719 21:59:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.719 21:59:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.719 21:59:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.719 21:59:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.719 21:59:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.719 21:59:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.719 21:59:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:02.719 21:59:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:02.719 21:59:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.719 21:59:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.719 21:59:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.719 21:59:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.719 21:59:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.719 21:59:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.719 21:59:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.719 21:59:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.719 21:59:44 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.719 21:59:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.719 21:59:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.719 21:59:44 -- paths/export.sh@5 -- # export PATH 00:08:02.719 21:59:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.719 21:59:44 -- nvmf/common.sh@47 -- # : 0 00:08:02.719 21:59:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.719 21:59:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.719 21:59:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.719 21:59:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.719 21:59:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.719 21:59:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.719 21:59:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.719 21:59:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.719 21:59:44 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:02.719 21:59:44 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:02.719 21:59:44 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:02.719 21:59:44 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:02.719 21:59:44 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:02.719 21:59:44 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:02.719 21:59:44 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:02.719 21:59:44 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:02.719 21:59:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.719 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.719 21:59:44 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:08:02.719 21:59:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:02.719 21:59:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.719 21:59:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:02.719 21:59:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:02.719 21:59:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:02.719 21:59:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.719 21:59:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.719 21:59:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.719 21:59:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:02.719 21:59:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:02.719 21:59:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.719 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.250 21:59:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:05.250 21:59:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.250 21:59:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.250 21:59:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.250 21:59:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.250 21:59:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.250 21:59:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.250 21:59:47 -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.250 21:59:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.250 21:59:47 -- nvmf/common.sh@296 -- # e810=() 00:08:05.250 21:59:47 -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.250 21:59:47 -- nvmf/common.sh@297 -- # x722=() 00:08:05.250 21:59:47 -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.250 21:59:47 -- nvmf/common.sh@298 -- # mlx=() 00:08:05.250 21:59:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.250 21:59:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:08:05.250 21:59:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.250 21:59:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.250 21:59:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.250 21:59:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.250 21:59:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.250 21:59:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:05.250 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:05.250 21:59:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:08:05.250 21:59:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:05.250 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:05.250 21:59:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.250 21:59:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.250 21:59:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.250 21:59:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.250 21:59:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:05.251 21:59:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.251 21:59:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:05.251 Found net devices under 0000:84:00.0: cvl_0_0 00:08:05.251 21:59:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.251 21:59:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.251 21:59:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.251 21:59:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:05.251 21:59:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.251 21:59:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:05.251 Found net devices under 0000:84:00.1: cvl_0_1 00:08:05.251 21:59:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.251 21:59:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:05.251 21:59:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:05.251 21:59:47 -- 
nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:05.251 21:59:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:05.251 21:59:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:05.251 21:59:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.251 21:59:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.251 21:59:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.251 21:59:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.251 21:59:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.251 21:59:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.251 21:59:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.251 21:59:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.251 21:59:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.251 21:59:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.251 21:59:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.251 21:59:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.251 21:59:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.251 21:59:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.251 21:59:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.251 21:59:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.251 21:59:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.251 21:59:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.251 21:59:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.251 21:59:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:08:05.251 00:08:05.251 --- 10.0.0.2 ping statistics --- 00:08:05.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.251 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:05.251 21:59:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:08:05.251 00:08:05.251 --- 10.0.0.1 ping statistics --- 00:08:05.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.251 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:05.251 21:59:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.251 21:59:47 -- nvmf/common.sh@411 -- # return 0 00:08:05.251 21:59:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:05.251 21:59:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.251 21:59:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:05.251 21:59:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:05.251 21:59:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.251 21:59:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:05.251 21:59:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:05.251 21:59:47 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:05.251 21:59:47 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:05.251 21:59:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:05.251 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.251 21:59:47 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:05.251 21:59:47 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:05.251 21:59:47 -- target/nvmf_example.sh@34 -- # nvmfpid=3853604 00:08:05.251 21:59:47 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:05.251 21:59:47 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.251 21:59:47 -- target/nvmf_example.sh@36 -- # waitforlisten 3853604 00:08:05.251 21:59:47 -- common/autotest_common.sh@817 -- # '[' -z 3853604 ']' 00:08:05.251 21:59:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.251 21:59:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:05.251 21:59:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.251 21:59:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:05.251 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.251 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.510 21:59:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.510 21:59:47 -- common/autotest_common.sh@850 -- # return 0 00:08:05.510 21:59:47 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:05.510 21:59:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:05.510 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.510 21:59:47 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.510 21:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.510 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.510 21:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.510 21:59:47 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:05.510 21:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.510 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.768 21:59:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.768 21:59:47 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:05.768 21:59:47 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.768 21:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.768 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.768 21:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.768 21:59:47 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:05.768 21:59:47 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.768 21:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.768 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.768 21:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.769 21:59:47 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.769 21:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.769 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:05.769 21:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.769 21:59:47 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:05.769 21:59:47 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:05.769 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.976 Initializing NVMe Controllers 00:08:17.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:17.976 Initialization complete. 
Launching workers. 00:08:17.976 ======================================================== 00:08:17.976 Latency(us) 00:08:17.976 Device Information : IOPS MiB/s Average min max 00:08:17.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14054.43 54.90 4553.42 856.58 16429.41 00:08:17.976 ======================================================== 00:08:17.976 Total : 14054.43 54.90 4553.42 856.58 16429.41 00:08:17.976 00:08:17.976 21:59:58 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:17.976 21:59:58 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:17.976 21:59:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:17.976 21:59:58 -- nvmf/common.sh@117 -- # sync 00:08:17.976 21:59:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.976 21:59:58 -- nvmf/common.sh@120 -- # set +e 00:08:17.976 21:59:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.976 21:59:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.976 rmmod nvme_tcp 00:08:17.976 rmmod nvme_fabrics 00:08:17.976 rmmod nvme_keyring 00:08:17.976 21:59:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.976 21:59:58 -- nvmf/common.sh@124 -- # set -e 00:08:17.976 21:59:58 -- nvmf/common.sh@125 -- # return 0 00:08:17.976 21:59:58 -- nvmf/common.sh@478 -- # '[' -n 3853604 ']' 00:08:17.976 21:59:58 -- nvmf/common.sh@479 -- # killprocess 3853604 00:08:17.976 21:59:58 -- common/autotest_common.sh@936 -- # '[' -z 3853604 ']' 00:08:17.976 21:59:58 -- common/autotest_common.sh@940 -- # kill -0 3853604 00:08:17.976 21:59:58 -- common/autotest_common.sh@941 -- # uname 00:08:17.976 21:59:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.976 21:59:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3853604 00:08:17.976 21:59:58 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:17.976 21:59:58 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:17.976 21:59:58 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3853604' 00:08:17.976 killing process with pid 3853604 00:08:17.976 21:59:58 -- common/autotest_common.sh@955 -- # kill 3853604 00:08:17.976 21:59:58 -- common/autotest_common.sh@960 -- # wait 3853604 00:08:17.976 nvmf threads initialize successfully 00:08:17.976 bdev subsystem init successfully 00:08:17.976 created a nvmf target service 00:08:17.976 create targets's poll groups done 00:08:17.976 all subsystems of target started 00:08:17.976 nvmf target is running 00:08:17.976 all subsystems of target stopped 00:08:17.976 destroy targets's poll groups done 00:08:17.976 destroyed the nvmf target service 00:08:17.976 bdev subsystem finish successfully 00:08:17.976 nvmf threads destroy successfully 00:08:17.976 21:59:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:17.976 21:59:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:17.976 21:59:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:17.976 21:59:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.976 21:59:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.976 21:59:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.976 21:59:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.976 21:59:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.234 22:00:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.234 22:00:00 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:18.234 22:00:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:18.234 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.234 00:08:18.234 real 0m15.758s 00:08:18.234 user 0m42.187s 00:08:18.234 sys 0m3.859s 00:08:18.234 22:00:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:18.234 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.234 ************************************ 00:08:18.234 END TEST 
nvmf_example 00:08:18.234 ************************************ 00:08:18.234 22:00:00 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:18.234 22:00:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.234 22:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.234 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.495 ************************************ 00:08:18.495 START TEST nvmf_filesystem 00:08:18.495 ************************************ 00:08:18.495 22:00:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:18.495 * Looking for test storage... 00:08:18.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.495 22:00:00 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:18.495 22:00:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:18.495 22:00:00 -- common/autotest_common.sh@34 -- # set -e 00:08:18.495 22:00:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:18.495 22:00:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:18.495 22:00:00 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:18.495 22:00:00 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:18.495 22:00:00 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:18.495 22:00:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:18.495 22:00:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:18.495 22:00:00 -- 
common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:18.495 22:00:00 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:18.495 22:00:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:18.495 22:00:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:18.495 22:00:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:18.495 22:00:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:18.495 22:00:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:18.495 22:00:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:18.495 22:00:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:18.495 22:00:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:18.495 22:00:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:18.495 22:00:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:18.495 22:00:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:18.495 22:00:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:18.495 22:00:00 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:18.495 22:00:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:18.495 22:00:00 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:18.495 22:00:00 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:18.495 22:00:00 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:18.495 22:00:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:18.495 22:00:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:18.495 22:00:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:18.495 22:00:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:18.495 22:00:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:18.495 22:00:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 
00:08:18.495 22:00:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:18.495 22:00:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:18.495 22:00:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:18.495 22:00:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:18.495 22:00:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:18.495 22:00:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:18.495 22:00:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:18.495 22:00:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:18.495 22:00:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:18.495 22:00:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:18.495 22:00:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:18.495 22:00:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:18.495 22:00:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:18.495 22:00:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:18.495 22:00:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:18.495 22:00:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:18.495 22:00:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:18.495 22:00:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:18.495 22:00:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:18.495 22:00:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:18.495 22:00:00 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:18.495 22:00:00 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:18.495 22:00:00 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 
00:08:18.495 22:00:00 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:18.495 22:00:00 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:18.495 22:00:00 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:18.495 22:00:00 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:18.495 22:00:00 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:18.495 22:00:00 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:18.495 22:00:00 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:18.495 22:00:00 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:18.495 22:00:00 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:18.495 22:00:00 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:18.495 22:00:00 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:18.495 22:00:00 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:18.495 22:00:00 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:18.495 22:00:00 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:18.495 22:00:00 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:18.495 22:00:00 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:18.495 22:00:00 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:18.495 22:00:00 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:18.495 22:00:00 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:18.495 22:00:00 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:18.495 22:00:00 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:18.495 22:00:00 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:18.495 22:00:00 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:18.495 22:00:00 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:18.495 22:00:00 -- common/build_config.sh@82 -- # 
CONFIG_URING=n 00:08:18.495 22:00:00 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:18.495 22:00:00 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:18.495 22:00:00 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:18.495 22:00:00 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:18.495 22:00:00 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:18.495 22:00:00 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.495 22:00:00 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:18.495 22:00:00 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.495 22:00:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:18.495 22:00:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:18.495 22:00:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:18.495 22:00:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:18.495 22:00:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:18.495 22:00:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:18.495 22:00:00 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:18.495 22:00:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:18.495 #define SPDK_CONFIG_H 00:08:18.495 #define SPDK_CONFIG_APPS 1 00:08:18.495 #define SPDK_CONFIG_ARCH native 00:08:18.495 #undef SPDK_CONFIG_ASAN 00:08:18.495 
#undef SPDK_CONFIG_AVAHI 00:08:18.496 #undef SPDK_CONFIG_CET 00:08:18.496 #define SPDK_CONFIG_COVERAGE 1 00:08:18.496 #define SPDK_CONFIG_CROSS_PREFIX 00:08:18.496 #undef SPDK_CONFIG_CRYPTO 00:08:18.496 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:18.496 #undef SPDK_CONFIG_CUSTOMOCF 00:08:18.496 #undef SPDK_CONFIG_DAOS 00:08:18.496 #define SPDK_CONFIG_DAOS_DIR 00:08:18.496 #define SPDK_CONFIG_DEBUG 1 00:08:18.496 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:18.496 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:18.496 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:18.496 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:18.496 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:18.496 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:18.496 #define SPDK_CONFIG_EXAMPLES 1 00:08:18.496 #undef SPDK_CONFIG_FC 00:08:18.496 #define SPDK_CONFIG_FC_PATH 00:08:18.496 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:18.496 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:18.496 #undef SPDK_CONFIG_FUSE 00:08:18.496 #undef SPDK_CONFIG_FUZZER 00:08:18.496 #define SPDK_CONFIG_FUZZER_LIB 00:08:18.496 #undef SPDK_CONFIG_GOLANG 00:08:18.496 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:18.496 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:18.496 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:18.496 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:18.496 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:18.496 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:18.496 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:18.496 #define SPDK_CONFIG_IDXD 1 00:08:18.496 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:18.496 #undef SPDK_CONFIG_IPSEC_MB 00:08:18.496 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:18.496 #define SPDK_CONFIG_ISAL 1 00:08:18.496 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:18.496 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:18.496 #define SPDK_CONFIG_LIBDIR 00:08:18.496 #undef SPDK_CONFIG_LTO 00:08:18.496 #define SPDK_CONFIG_MAX_LCORES 00:08:18.496 #define SPDK_CONFIG_NVME_CUSE 1 
00:08:18.496 #undef SPDK_CONFIG_OCF 00:08:18.496 #define SPDK_CONFIG_OCF_PATH 00:08:18.496 #define SPDK_CONFIG_OPENSSL_PATH 00:08:18.496 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:18.496 #define SPDK_CONFIG_PGO_DIR 00:08:18.496 #undef SPDK_CONFIG_PGO_USE 00:08:18.496 #define SPDK_CONFIG_PREFIX /usr/local 00:08:18.496 #undef SPDK_CONFIG_RAID5F 00:08:18.496 #undef SPDK_CONFIG_RBD 00:08:18.496 #define SPDK_CONFIG_RDMA 1 00:08:18.496 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:18.496 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:18.496 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:18.496 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:18.496 #define SPDK_CONFIG_SHARED 1 00:08:18.496 #undef SPDK_CONFIG_SMA 00:08:18.496 #define SPDK_CONFIG_TESTS 1 00:08:18.496 #undef SPDK_CONFIG_TSAN 00:08:18.496 #define SPDK_CONFIG_UBLK 1 00:08:18.496 #define SPDK_CONFIG_UBSAN 1 00:08:18.496 #undef SPDK_CONFIG_UNIT_TESTS 00:08:18.496 #undef SPDK_CONFIG_URING 00:08:18.496 #define SPDK_CONFIG_URING_PATH 00:08:18.496 #undef SPDK_CONFIG_URING_ZNS 00:08:18.496 #undef SPDK_CONFIG_USDT 00:08:18.496 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:18.496 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:18.496 #define SPDK_CONFIG_VFIO_USER 1 00:08:18.496 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:18.496 #define SPDK_CONFIG_VHOST 1 00:08:18.496 #define SPDK_CONFIG_VIRTIO 1 00:08:18.496 #undef SPDK_CONFIG_VTUNE 00:08:18.496 #define SPDK_CONFIG_VTUNE_DIR 00:08:18.496 #define SPDK_CONFIG_WERROR 1 00:08:18.496 #define SPDK_CONFIG_WPDK_DIR 00:08:18.496 #undef SPDK_CONFIG_XNVME 00:08:18.496 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:18.496 22:00:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:18.496 22:00:00 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.496 22:00:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.496 22:00:00 -- scripts/common.sh@510 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.496 22:00:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.496 22:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.496 22:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.496 22:00:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.496 22:00:00 -- paths/export.sh@5 -- # export PATH 00:08:18.496 22:00:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.496 22:00:00 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.496 22:00:00 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.496 22:00:00 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:18.496 22:00:00 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:18.496 22:00:00 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:18.496 22:00:00 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:18.496 22:00:00 -- pm/common@67 -- # TEST_TAG=N/A 00:08:18.496 22:00:00 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:18.496 22:00:00 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:18.496 22:00:00 -- pm/common@71 -- # uname -s 00:08:18.496 22:00:00 -- pm/common@71 -- # PM_OS=Linux 00:08:18.496 22:00:00 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:18.496 22:00:00 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:18.496 22:00:00 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:18.496 22:00:00 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:08:18.496 22:00:00 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:18.496 22:00:00 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:18.496 22:00:00 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:18.496 22:00:00 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:18.496 22:00:00 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:18.496 22:00:00 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:18.496 22:00:00 -- common/autotest_common.sh@57 -- # : 0 00:08:18.496 22:00:00 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:18.496 22:00:00 -- common/autotest_common.sh@61 -- # : 0 00:08:18.496 22:00:00 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:18.496 22:00:00 -- common/autotest_common.sh@63 -- # : 0 00:08:18.496 22:00:00 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:18.496 22:00:00 -- common/autotest_common.sh@65 -- # : 1 00:08:18.496 22:00:00 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:18.496 22:00:00 -- common/autotest_common.sh@67 -- # : 0 00:08:18.496 22:00:00 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:18.496 22:00:00 -- common/autotest_common.sh@69 -- # : 00:08:18.496 22:00:00 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:18.496 22:00:00 -- common/autotest_common.sh@71 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:18.497 22:00:00 -- common/autotest_common.sh@73 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:18.497 22:00:00 -- common/autotest_common.sh@75 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:18.497 22:00:00 -- common/autotest_common.sh@77 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 
00:08:18.497 22:00:00 -- common/autotest_common.sh@79 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:18.497 22:00:00 -- common/autotest_common.sh@81 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:18.497 22:00:00 -- common/autotest_common.sh@83 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:18.497 22:00:00 -- common/autotest_common.sh@85 -- # : 1 00:08:18.497 22:00:00 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:18.497 22:00:00 -- common/autotest_common.sh@87 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:18.497 22:00:00 -- common/autotest_common.sh@89 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:18.497 22:00:00 -- common/autotest_common.sh@91 -- # : 1 00:08:18.497 22:00:00 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:18.497 22:00:00 -- common/autotest_common.sh@93 -- # : 1 00:08:18.497 22:00:00 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:18.497 22:00:00 -- common/autotest_common.sh@95 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:18.497 22:00:00 -- common/autotest_common.sh@97 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:18.497 22:00:00 -- common/autotest_common.sh@99 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:18.497 22:00:00 -- common/autotest_common.sh@101 -- # : tcp 00:08:18.497 22:00:00 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:18.497 22:00:00 -- common/autotest_common.sh@103 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:18.497 22:00:00 -- common/autotest_common.sh@105 -- # : 0 
00:08:18.497 22:00:00 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:18.497 22:00:00 -- common/autotest_common.sh@107 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:18.497 22:00:00 -- common/autotest_common.sh@109 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:18.497 22:00:00 -- common/autotest_common.sh@111 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:18.497 22:00:00 -- common/autotest_common.sh@113 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:18.497 22:00:00 -- common/autotest_common.sh@115 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:18.497 22:00:00 -- common/autotest_common.sh@117 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:18.497 22:00:00 -- common/autotest_common.sh@119 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:18.497 22:00:00 -- common/autotest_common.sh@121 -- # : 1 00:08:18.497 22:00:00 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:18.497 22:00:00 -- common/autotest_common.sh@123 -- # : 00:08:18.497 22:00:00 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:18.497 22:00:00 -- common/autotest_common.sh@125 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:18.497 22:00:00 -- common/autotest_common.sh@127 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:18.497 22:00:00 -- common/autotest_common.sh@129 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:18.497 22:00:00 -- common/autotest_common.sh@131 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@132 -- # export 
SPDK_TEST_OCF 00:08:18.497 22:00:00 -- common/autotest_common.sh@133 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:18.497 22:00:00 -- common/autotest_common.sh@135 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:18.497 22:00:00 -- common/autotest_common.sh@137 -- # : 00:08:18.497 22:00:00 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:18.497 22:00:00 -- common/autotest_common.sh@139 -- # : true 00:08:18.497 22:00:00 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:18.497 22:00:00 -- common/autotest_common.sh@141 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:18.497 22:00:00 -- common/autotest_common.sh@143 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:18.497 22:00:00 -- common/autotest_common.sh@145 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:18.497 22:00:00 -- common/autotest_common.sh@147 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:18.497 22:00:00 -- common/autotest_common.sh@149 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:18.497 22:00:00 -- common/autotest_common.sh@151 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:18.497 22:00:00 -- common/autotest_common.sh@153 -- # : e810 00:08:18.497 22:00:00 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:18.497 22:00:00 -- common/autotest_common.sh@155 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:18.497 22:00:00 -- common/autotest_common.sh@157 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:18.497 22:00:00 -- 
common/autotest_common.sh@159 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:18.497 22:00:00 -- common/autotest_common.sh@161 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:18.497 22:00:00 -- common/autotest_common.sh@163 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:18.497 22:00:00 -- common/autotest_common.sh@166 -- # : 00:08:18.497 22:00:00 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:18.497 22:00:00 -- common/autotest_common.sh@168 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:18.497 22:00:00 -- common/autotest_common.sh@170 -- # : 0 00:08:18.497 22:00:00 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:18.497 22:00:00 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.497 22:00:00 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:18.497 22:00:00 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:18.498 22:00:00 -- common/autotest_common.sh@184 -- 
# export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:18.498 22:00:00 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:18.498 22:00:00 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:18.498 22:00:00 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:18.498 22:00:00 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:18.498 22:00:00 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:18.498 22:00:00 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:18.498 22:00:00 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:18.498 22:00:00 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:18.498 22:00:00 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:18.498 
22:00:00 -- common/autotest_common.sh@199 -- # cat 00:08:18.498 22:00:00 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:18.498 22:00:00 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:18.498 22:00:00 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:18.498 22:00:00 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:18.498 22:00:00 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:18.498 22:00:00 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:18.498 22:00:00 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:18.498 22:00:00 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.498 22:00:00 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.498 22:00:00 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.498 22:00:00 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.498 22:00:00 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:18.498 22:00:00 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:18.498 22:00:00 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:18.498 22:00:00 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:18.498 22:00:00 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:18.498 
22:00:00 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:18.498 22:00:00 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:18.498 22:00:00 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:18.498 22:00:00 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:18.498 22:00:00 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:18.498 22:00:00 -- common/autotest_common.sh@252 -- # valgrind= 00:08:18.498 22:00:00 -- common/autotest_common.sh@258 -- # uname -s 00:08:18.498 22:00:00 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:08:18.498 22:00:00 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:18.498 22:00:00 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:18.498 22:00:00 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:18.498 22:00:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:18.498 22:00:00 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:08:18.498 22:00:00 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:18.498 22:00:00 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:18.498 22:00:00 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:18.498 22:00:00 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:18.498 22:00:00 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:18.498 22:00:00 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:18.498 22:00:00 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:18.498 22:00:00 -- common/autotest_common.sh@307 -- # [[ -z 3855344 ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@307 -- # kill -0 3855344 00:08:18.498 22:00:00 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:18.498 22:00:00 -- 
common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:18.498 22:00:00 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:18.498 22:00:00 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:18.498 22:00:00 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:18.498 22:00:00 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:18.498 22:00:00 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:18.498 22:00:00 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.wQ447S 00:08:18.498 22:00:00 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:18.498 22:00:00 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:18.498 22:00:00 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wQ447S/tests/target /tmp/spdk.wQ447S 00:08:18.498 22:00:00 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@316 -- # df -T 00:08:18.498 22:00:00 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:18.498 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # 
read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:18.498 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=36860669952 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=45083283456 00:08:18.498 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=8222613504 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=22540361728 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541639680 00:08:18.498 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=9007853568 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9016659968 00:08:18.498 22:00:00 -- 
common/autotest_common.sh@352 -- # uses["$mount"]=8806400 00:08:18.498 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=22541099008 00:08:18.498 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=22541643776 00:08:18.499 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=544768 00:08:18.499 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.499 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:18.499 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:18.499 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:08:18.499 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:08:18.499 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:18.499 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.499 22:00:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:18.499 22:00:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:18.499 22:00:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=4508323840 00:08:18.499 22:00:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4508327936 00:08:18.499 22:00:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:18.499 22:00:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:18.499 22:00:00 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:18.499 * Looking for test storage... 
00:08:18.499 22:00:00 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:18.499 22:00:00 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:18.499 22:00:00 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.499 22:00:00 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:18.499 22:00:00 -- common/autotest_common.sh@361 -- # mount=/ 00:08:18.499 22:00:00 -- common/autotest_common.sh@363 -- # target_space=36860669952 00:08:18.499 22:00:00 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:18.499 22:00:00 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:18.499 22:00:00 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:18.499 22:00:00 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:18.499 22:00:00 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:18.499 22:00:00 -- common/autotest_common.sh@370 -- # new_size=10437206016 00:08:18.499 22:00:00 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:18.499 22:00:00 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.499 22:00:00 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.499 22:00:00 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.499 22:00:00 -- common/autotest_common.sh@378 -- # return 0 00:08:18.499 22:00:00 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:18.499 22:00:00 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 
00:08:18.499 22:00:00 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:18.499 22:00:00 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:18.499 22:00:00 -- common/autotest_common.sh@1673 -- # true 00:08:18.499 22:00:00 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:18.499 22:00:00 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:18.499 22:00:00 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:18.499 22:00:00 -- common/autotest_common.sh@27 -- # exec 00:08:18.499 22:00:00 -- common/autotest_common.sh@29 -- # exec 00:08:18.499 22:00:00 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:18.499 22:00:00 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:18.499 22:00:00 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:18.499 22:00:00 -- common/autotest_common.sh@18 -- # set -x 00:08:18.499 22:00:00 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.499 22:00:00 -- nvmf/common.sh@7 -- # uname -s 00:08:18.499 22:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.499 22:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.499 22:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.499 22:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.499 22:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.499 22:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.499 22:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.499 22:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.499 22:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.499 22:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.499 22:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
00:08:18.499 22:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:18.499 22:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.499 22:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.499 22:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.499 22:00:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.499 22:00:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.758 22:00:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.758 22:00:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.758 22:00:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.758 22:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.758 22:00:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.758 22:00:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.758 22:00:00 -- paths/export.sh@5 -- # export PATH 00:08:18.758 22:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.758 22:00:00 -- nvmf/common.sh@47 
-- # : 0 00:08:18.758 22:00:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.758 22:00:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.758 22:00:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.758 22:00:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.758 22:00:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.758 22:00:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.758 22:00:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.758 22:00:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.758 22:00:00 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:18.758 22:00:00 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:18.758 22:00:00 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:18.758 22:00:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:18.758 22:00:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.758 22:00:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:18.758 22:00:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:18.758 22:00:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:18.758 22:00:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.758 22:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.758 22:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.758 22:00:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:18.758 22:00:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:18.758 22:00:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.758 22:00:00 -- common/autotest_common.sh@10 -- # set +x 00:08:21.279 22:00:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:21.279 22:00:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.279 22:00:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.279 22:00:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.279 22:00:03 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:08:21.279 22:00:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.279 22:00:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.279 22:00:03 -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.279 22:00:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.279 22:00:03 -- nvmf/common.sh@296 -- # e810=() 00:08:21.279 22:00:03 -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.279 22:00:03 -- nvmf/common.sh@297 -- # x722=() 00:08:21.279 22:00:03 -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.279 22:00:03 -- nvmf/common.sh@298 -- # mlx=() 00:08:21.279 22:00:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.279 22:00:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.279 22:00:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.279 22:00:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:21.279 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:21.279 22:00:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.279 22:00:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:21.279 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:21.279 22:00:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.279 22:00:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.279 22:00:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.279 22:00:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:21.279 Found net devices under 0000:84:00.0: cvl_0_0 00:08:21.279 22:00:03 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.279 22:00:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.279 22:00:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.279 22:00:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:21.279 Found net devices under 0000:84:00.1: cvl_0_1 00:08:21.279 22:00:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:21.279 22:00:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:21.279 22:00:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.279 22:00:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.279 22:00:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.279 22:00:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.279 22:00:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.279 22:00:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.279 22:00:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.279 22:00:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.279 22:00:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.279 22:00:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.279 22:00:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.279 22:00:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.279 22:00:03 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.279 22:00:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.279 22:00:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.279 22:00:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.279 22:00:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.279 22:00:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.279 22:00:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:21.279 00:08:21.279 --- 10.0.0.2 ping statistics --- 00:08:21.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.279 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:21.279 22:00:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:21.279 00:08:21.279 --- 10.0.0.1 ping statistics --- 00:08:21.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.279 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:21.279 22:00:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.279 22:00:03 -- nvmf/common.sh@411 -- # return 0 00:08:21.279 22:00:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:21.279 22:00:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.279 22:00:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:21.279 22:00:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.279 22:00:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:21.279 22:00:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:21.279 22:00:03 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:21.279 22:00:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.279 22:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.279 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:21.279 ************************************ 00:08:21.279 START TEST nvmf_filesystem_no_in_capsule 00:08:21.279 ************************************ 00:08:21.279 22:00:03 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:08:21.279 22:00:03 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:21.279 22:00:03 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:21.279 22:00:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:21.280 22:00:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:21.280 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:21.280 22:00:03 -- nvmf/common.sh@470 -- # nvmfpid=3857073 00:08:21.280 22:00:03 -- nvmf/common.sh@469 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.280 22:00:03 -- nvmf/common.sh@471 -- # waitforlisten 3857073 00:08:21.280 22:00:03 -- common/autotest_common.sh@817 -- # '[' -z 3857073 ']' 00:08:21.280 22:00:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.280 22:00:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.280 22:00:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.280 22:00:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.280 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:21.537 [2024-04-24 22:00:03.536530] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:08:21.537 [2024-04-24 22:00:03.536617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.537 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.537 [2024-04-24 22:00:03.619772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.537 [2024-04-24 22:00:03.747877] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.537 [2024-04-24 22:00:03.747950] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.537 [2024-04-24 22:00:03.747976] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.537 [2024-04-24 22:00:03.747995] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:21.537 [2024-04-24 22:00:03.748007] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.537 [2024-04-24 22:00:03.748106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.538 [2024-04-24 22:00:03.748213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.538 [2024-04-24 22:00:03.748285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.538 [2024-04-24 22:00:03.748288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.795 22:00:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:21.795 22:00:03 -- common/autotest_common.sh@850 -- # return 0 00:08:21.795 22:00:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:21.795 22:00:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:21.795 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:21.795 22:00:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.795 22:00:04 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:21.795 22:00:04 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:21.795 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.795 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:21.795 [2024-04-24 22:00:04.014775] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.795 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.795 22:00:04 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:21.795 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.795 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.052 Malloc1 00:08:22.052 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.052 22:00:04 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
-a -s SPDKISFASTANDAWESOME 00:08:22.052 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.052 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.052 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.052 22:00:04 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.052 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.052 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.052 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.052 22:00:04 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.052 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.052 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.052 [2024-04-24 22:00:04.195270] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:22.052 [2024-04-24 22:00:04.195597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.052 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.052 22:00:04 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:22.052 22:00:04 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:22.052 22:00:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:22.052 22:00:04 -- common/autotest_common.sh@1366 -- # local bs 00:08:22.052 22:00:04 -- common/autotest_common.sh@1367 -- # local nb 00:08:22.052 22:00:04 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:22.052 22:00:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.052 22:00:04 -- common/autotest_common.sh@10 -- # set +x 00:08:22.052 22:00:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.053 
22:00:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:22.053 { 00:08:22.053 "name": "Malloc1", 00:08:22.053 "aliases": [ 00:08:22.053 "ababeee3-cab5-49ff-bcb8-f3d54ca04d74" 00:08:22.053 ], 00:08:22.053 "product_name": "Malloc disk", 00:08:22.053 "block_size": 512, 00:08:22.053 "num_blocks": 1048576, 00:08:22.053 "uuid": "ababeee3-cab5-49ff-bcb8-f3d54ca04d74", 00:08:22.053 "assigned_rate_limits": { 00:08:22.053 "rw_ios_per_sec": 0, 00:08:22.053 "rw_mbytes_per_sec": 0, 00:08:22.053 "r_mbytes_per_sec": 0, 00:08:22.053 "w_mbytes_per_sec": 0 00:08:22.053 }, 00:08:22.053 "claimed": true, 00:08:22.053 "claim_type": "exclusive_write", 00:08:22.053 "zoned": false, 00:08:22.053 "supported_io_types": { 00:08:22.053 "read": true, 00:08:22.053 "write": true, 00:08:22.053 "unmap": true, 00:08:22.053 "write_zeroes": true, 00:08:22.053 "flush": true, 00:08:22.053 "reset": true, 00:08:22.053 "compare": false, 00:08:22.053 "compare_and_write": false, 00:08:22.053 "abort": true, 00:08:22.053 "nvme_admin": false, 00:08:22.053 "nvme_io": false 00:08:22.053 }, 00:08:22.053 "memory_domains": [ 00:08:22.053 { 00:08:22.053 "dma_device_id": "system", 00:08:22.053 "dma_device_type": 1 00:08:22.053 }, 00:08:22.053 { 00:08:22.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.053 "dma_device_type": 2 00:08:22.053 } 00:08:22.053 ], 00:08:22.053 "driver_specific": {} 00:08:22.053 } 00:08:22.053 ]' 00:08:22.053 22:00:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:22.053 22:00:04 -- common/autotest_common.sh@1369 -- # bs=512 00:08:22.053 22:00:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:22.053 22:00:04 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:22.053 22:00:04 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:22.053 22:00:04 -- common/autotest_common.sh@1374 -- # echo 512 00:08:22.053 22:00:04 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:22.053 22:00:04 -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.617 22:00:04 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.617 22:00:04 -- common/autotest_common.sh@1184 -- # local i=0 00:08:22.617 22:00:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.617 22:00:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:22.617 22:00:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:25.141 22:00:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:25.141 22:00:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:25.141 22:00:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.141 22:00:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:25.141 22:00:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.141 22:00:06 -- common/autotest_common.sh@1194 -- # return 0 00:08:25.141 22:00:06 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:25.141 22:00:06 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:25.141 22:00:06 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:25.141 22:00:06 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:25.141 22:00:06 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:25.141 22:00:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:25.141 22:00:06 -- setup/common.sh@80 -- # echo 536870912 00:08:25.141 22:00:06 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:25.141 22:00:06 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:25.141 22:00:06 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:25.141 22:00:06 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:25.141 22:00:06 -- 
target/filesystem.sh@69 -- # partprobe 00:08:25.141 22:00:07 -- target/filesystem.sh@70 -- # sleep 1 00:08:26.515 22:00:08 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:26.515 22:00:08 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:26.515 22:00:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:26.515 22:00:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.515 22:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:26.515 ************************************ 00:08:26.515 START TEST filesystem_ext4 00:08:26.515 ************************************ 00:08:26.515 22:00:08 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:26.515 22:00:08 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:26.515 22:00:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.515 22:00:08 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:26.515 22:00:08 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:26.515 22:00:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:26.515 22:00:08 -- common/autotest_common.sh@914 -- # local i=0 00:08:26.515 22:00:08 -- common/autotest_common.sh@915 -- # local force 00:08:26.515 22:00:08 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:26.515 22:00:08 -- common/autotest_common.sh@918 -- # force=-F 00:08:26.515 22:00:08 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:26.515 mke2fs 1.46.5 (30-Dec-2021) 00:08:26.515 Discarding device blocks: 0/522240 done 00:08:26.516 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:26.516 Filesystem UUID: b071749c-bb48-4a26-bc53-1c5de9254092 00:08:26.516 Superblock backups stored on blocks: 00:08:26.516 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:26.516 00:08:26.516 Allocating group tables: 0/64 done 00:08:26.516 Writing inode tables: 0/64 done 00:08:26.784 Creating 
journal (8192 blocks): done 00:08:27.624 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:27.624 00:08:27.624 22:00:09 -- common/autotest_common.sh@931 -- # return 0 00:08:27.624 22:00:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.624 22:00:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.624 22:00:09 -- target/filesystem.sh@25 -- # sync 00:08:27.624 22:00:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.624 22:00:09 -- target/filesystem.sh@27 -- # sync 00:08:27.624 22:00:09 -- target/filesystem.sh@29 -- # i=0 00:08:27.624 22:00:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.890 22:00:09 -- target/filesystem.sh@37 -- # kill -0 3857073 00:08:27.890 22:00:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.890 22:00:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.890 22:00:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.890 22:00:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.890 00:08:27.890 real 0m1.398s 00:08:27.890 user 0m0.019s 00:08:27.890 sys 0m0.032s 00:08:27.890 22:00:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.890 22:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.891 ************************************ 00:08:27.891 END TEST filesystem_ext4 00:08:27.891 ************************************ 00:08:27.891 22:00:09 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.891 22:00:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.891 22:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.891 22:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.891 ************************************ 00:08:27.891 START TEST filesystem_btrfs 00:08:27.891 ************************************ 00:08:27.891 22:00:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 
00:08:27.891 22:00:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.891 22:00:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.891 22:00:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.891 22:00:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:27.891 22:00:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:27.891 22:00:10 -- common/autotest_common.sh@914 -- # local i=0 00:08:27.891 22:00:10 -- common/autotest_common.sh@915 -- # local force 00:08:27.891 22:00:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:27.891 22:00:10 -- common/autotest_common.sh@920 -- # force=-f 00:08:27.891 22:00:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.157 btrfs-progs v6.6.2 00:08:28.157 See https://btrfs.readthedocs.io for more information. 00:08:28.157 00:08:28.157 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:28.157 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.157 this does not affect your deployments: 00:08:28.157 - DUP for metadata (-m dup) 00:08:28.157 - enabled no-holes (-O no-holes) 00:08:28.157 - enabled free-space-tree (-R free-space-tree) 00:08:28.157 00:08:28.157 Label: (null) 00:08:28.157 UUID: 7d26b2c7-b7db-4856-a18f-5424a7b7e8b4 00:08:28.157 Node size: 16384 00:08:28.157 Sector size: 4096 00:08:28.157 Filesystem size: 510.00MiB 00:08:28.157 Block group profiles: 00:08:28.157 Data: single 8.00MiB 00:08:28.157 Metadata: DUP 32.00MiB 00:08:28.157 System: DUP 8.00MiB 00:08:28.157 SSD detected: yes 00:08:28.157 Zoned device: no 00:08:28.157 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.157 Runtime features: free-space-tree 00:08:28.157 Checksum: crc32c 00:08:28.157 Number of devices: 1 00:08:28.157 Devices: 00:08:28.157 ID SIZE PATH 00:08:28.157 1 510.00MiB /dev/nvme0n1p1 00:08:28.157 00:08:28.158 22:00:10 -- 
common/autotest_common.sh@931 -- # return 0 00:08:28.158 22:00:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.159 22:00:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.159 22:00:11 -- target/filesystem.sh@25 -- # sync 00:08:29.159 22:00:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.159 22:00:11 -- target/filesystem.sh@27 -- # sync 00:08:29.159 22:00:11 -- target/filesystem.sh@29 -- # i=0 00:08:29.159 22:00:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.159 22:00:11 -- target/filesystem.sh@37 -- # kill -0 3857073 00:08:29.159 22:00:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.159 22:00:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.159 22:00:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.159 22:00:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.160 00:08:29.160 real 0m1.021s 00:08:29.160 user 0m0.015s 00:08:29.160 sys 0m0.049s 00:08:29.160 22:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:29.160 22:00:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.160 ************************************ 00:08:29.160 END TEST filesystem_btrfs 00:08:29.160 ************************************ 00:08:29.160 22:00:11 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:29.160 22:00:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:29.160 22:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.160 22:00:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.160 ************************************ 00:08:29.160 START TEST filesystem_xfs 00:08:29.160 ************************************ 00:08:29.160 22:00:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:29.160 22:00:11 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:29.160 22:00:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.160 22:00:11 -- 
target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:29.160 22:00:11 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:29.160 22:00:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:29.160 22:00:11 -- common/autotest_common.sh@914 -- # local i=0 00:08:29.160 22:00:11 -- common/autotest_common.sh@915 -- # local force 00:08:29.160 22:00:11 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:29.160 22:00:11 -- common/autotest_common.sh@920 -- # force=-f 00:08:29.160 22:00:11 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:29.160 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:29.160 = sectsz=512 attr=2, projid32bit=1 00:08:29.160 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:29.160 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:29.160 data = bsize=4096 blocks=130560, imaxpct=25 00:08:29.160 = sunit=0 swidth=0 blks 00:08:29.160 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:29.160 log =internal log bsize=4096 blocks=16384, version=2 00:08:29.160 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:29.160 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:30.122 Discarding blocks...Done. 
00:08:30.122 22:00:12 -- common/autotest_common.sh@931 -- # return 0 00:08:30.122 22:00:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.678 22:00:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.678 22:00:14 -- target/filesystem.sh@25 -- # sync 00:08:32.678 22:00:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.678 22:00:14 -- target/filesystem.sh@27 -- # sync 00:08:32.678 22:00:14 -- target/filesystem.sh@29 -- # i=0 00:08:32.678 22:00:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.678 22:00:14 -- target/filesystem.sh@37 -- # kill -0 3857073 00:08:32.678 22:00:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.678 22:00:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.678 22:00:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.678 22:00:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.678 00:08:32.678 real 0m3.320s 00:08:32.678 user 0m0.013s 00:08:32.678 sys 0m0.046s 00:08:32.678 22:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.678 22:00:14 -- common/autotest_common.sh@10 -- # set +x 00:08:32.678 ************************************ 00:08:32.678 END TEST filesystem_xfs 00:08:32.678 ************************************ 00:08:32.678 22:00:14 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.678 22:00:14 -- target/filesystem.sh@93 -- # sync 00:08:32.678 22:00:14 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.678 22:00:14 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.678 22:00:14 -- common/autotest_common.sh@1205 -- # local i=0 00:08:32.678 22:00:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:32.678 22:00:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.678 22:00:14 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:32.678 22:00:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.678 22:00:14 -- common/autotest_common.sh@1217 -- # return 0 00:08:32.678 22:00:14 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.678 22:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.678 22:00:14 -- common/autotest_common.sh@10 -- # set +x 00:08:32.678 22:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.678 22:00:14 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:32.678 22:00:14 -- target/filesystem.sh@101 -- # killprocess 3857073 00:08:32.678 22:00:14 -- common/autotest_common.sh@936 -- # '[' -z 3857073 ']' 00:08:32.678 22:00:14 -- common/autotest_common.sh@940 -- # kill -0 3857073 00:08:32.678 22:00:14 -- common/autotest_common.sh@941 -- # uname 00:08:32.678 22:00:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.678 22:00:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3857073 00:08:32.678 22:00:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.678 22:00:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.678 22:00:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3857073' 00:08:32.678 killing process with pid 3857073 00:08:32.678 22:00:14 -- common/autotest_common.sh@955 -- # kill 3857073 00:08:32.678 [2024-04-24 22:00:14.750616] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:32.678 22:00:14 -- common/autotest_common.sh@960 -- # wait 3857073 00:08:33.243 22:00:15 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.243 00:08:33.243 real 0m11.788s 00:08:33.243 user 0m45.378s 00:08:33.243 sys 0m1.727s 00:08:33.243 22:00:15 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.243 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.243 ************************************ 00:08:33.243 END TEST nvmf_filesystem_no_in_capsule 00:08:33.243 ************************************ 00:08:33.243 22:00:15 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:33.243 22:00:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.243 22:00:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.243 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.243 ************************************ 00:08:33.243 START TEST nvmf_filesystem_in_capsule 00:08:33.243 ************************************ 00:08:33.243 22:00:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:33.243 22:00:15 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:33.243 22:00:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:33.243 22:00:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:33.243 22:00:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:33.243 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.243 22:00:15 -- nvmf/common.sh@470 -- # nvmfpid=3859277 00:08:33.243 22:00:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.243 22:00:15 -- nvmf/common.sh@471 -- # waitforlisten 3859277 00:08:33.243 22:00:15 -- common/autotest_common.sh@817 -- # '[' -z 3859277 ']' 00:08:33.243 22:00:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.243 22:00:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:33.243 22:00:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:33.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.243 22:00:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:33.243 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.243 [2024-04-24 22:00:15.451314] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:08:33.243 [2024-04-24 22:00:15.451421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.501 [2024-04-24 22:00:15.528306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.501 [2024-04-24 22:00:15.654347] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.501 [2024-04-24 22:00:15.654426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.501 [2024-04-24 22:00:15.654444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.501 [2024-04-24 22:00:15.654472] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.501 [2024-04-24 22:00:15.654485] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:33.501 [2024-04-24 22:00:15.654552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.501 [2024-04-24 22:00:15.654590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.501 [2024-04-24 22:00:15.654644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.501 [2024-04-24 22:00:15.654648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.760 22:00:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:33.760 22:00:15 -- common/autotest_common.sh@850 -- # return 0 00:08:33.760 22:00:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:33.760 22:00:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:33.760 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.760 22:00:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.760 22:00:15 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:33.760 22:00:15 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:33.760 22:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.760 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.760 [2024-04-24 22:00:15.828453] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.760 22:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.760 22:00:15 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:33.760 22:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.760 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.760 Malloc1 00:08:33.760 22:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.760 22:00:15 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.760 22:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.760 22:00:15 -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.760 22:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.760 22:00:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:33.760 22:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.760 22:00:16 -- common/autotest_common.sh@10 -- # set +x 00:08:33.760 22:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.760 22:00:16 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.760 22:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.760 22:00:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 [2024-04-24 22:00:16.017087] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:34.033 [2024-04-24 22:00:16.017456] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.033 22:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.033 22:00:16 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:34.033 22:00:16 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:34.033 22:00:16 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:34.033 22:00:16 -- common/autotest_common.sh@1366 -- # local bs 00:08:34.033 22:00:16 -- common/autotest_common.sh@1367 -- # local nb 00:08:34.033 22:00:16 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:34.033 22:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.033 22:00:16 -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 22:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.033 22:00:16 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:34.033 { 00:08:34.033 "name": "Malloc1", 00:08:34.033 "aliases": 
[ 00:08:34.033 "56aff205-88d8-49de-924b-61e44c3838f8" 00:08:34.033 ], 00:08:34.033 "product_name": "Malloc disk", 00:08:34.033 "block_size": 512, 00:08:34.033 "num_blocks": 1048576, 00:08:34.033 "uuid": "56aff205-88d8-49de-924b-61e44c3838f8", 00:08:34.033 "assigned_rate_limits": { 00:08:34.033 "rw_ios_per_sec": 0, 00:08:34.033 "rw_mbytes_per_sec": 0, 00:08:34.033 "r_mbytes_per_sec": 0, 00:08:34.033 "w_mbytes_per_sec": 0 00:08:34.033 }, 00:08:34.033 "claimed": true, 00:08:34.033 "claim_type": "exclusive_write", 00:08:34.033 "zoned": false, 00:08:34.033 "supported_io_types": { 00:08:34.033 "read": true, 00:08:34.033 "write": true, 00:08:34.033 "unmap": true, 00:08:34.033 "write_zeroes": true, 00:08:34.033 "flush": true, 00:08:34.033 "reset": true, 00:08:34.033 "compare": false, 00:08:34.033 "compare_and_write": false, 00:08:34.033 "abort": true, 00:08:34.033 "nvme_admin": false, 00:08:34.033 "nvme_io": false 00:08:34.033 }, 00:08:34.033 "memory_domains": [ 00:08:34.033 { 00:08:34.033 "dma_device_id": "system", 00:08:34.033 "dma_device_type": 1 00:08:34.033 }, 00:08:34.033 { 00:08:34.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.033 "dma_device_type": 2 00:08:34.033 } 00:08:34.033 ], 00:08:34.033 "driver_specific": {} 00:08:34.033 } 00:08:34.033 ]' 00:08:34.033 22:00:16 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:34.033 22:00:16 -- common/autotest_common.sh@1369 -- # bs=512 00:08:34.033 22:00:16 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:34.034 22:00:16 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:34.034 22:00:16 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:34.034 22:00:16 -- common/autotest_common.sh@1374 -- # echo 512 00:08:34.034 22:00:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:34.034 22:00:16 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.606 22:00:16 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.606 22:00:16 -- common/autotest_common.sh@1184 -- # local i=0 00:08:34.606 22:00:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.606 22:00:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:34.606 22:00:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:36.502 22:00:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:36.502 22:00:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:36.502 22:00:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.502 22:00:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:36.502 22:00:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.502 22:00:18 -- common/autotest_common.sh@1194 -- # return 0 00:08:36.502 22:00:18 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:36.502 22:00:18 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:36.502 22:00:18 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:36.502 22:00:18 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:36.502 22:00:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:36.502 22:00:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:36.502 22:00:18 -- setup/common.sh@80 -- # echo 536870912 00:08:36.502 22:00:18 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:36.502 22:00:18 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:36.502 22:00:18 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:36.502 22:00:18 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:37.067 22:00:19 -- target/filesystem.sh@69 -- # partprobe 00:08:37.633 22:00:19 -- target/filesystem.sh@70 -- # sleep 1 00:08:38.565 22:00:20 -- 
target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:38.565 22:00:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:38.565 22:00:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:38.565 22:00:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.565 22:00:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.824 ************************************ 00:08:38.824 START TEST filesystem_in_capsule_ext4 00:08:38.824 ************************************ 00:08:38.824 22:00:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:38.824 22:00:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:38.824 22:00:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.824 22:00:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:38.824 22:00:20 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:38.824 22:00:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:38.824 22:00:20 -- common/autotest_common.sh@914 -- # local i=0 00:08:38.824 22:00:20 -- common/autotest_common.sh@915 -- # local force 00:08:38.824 22:00:20 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:38.824 22:00:20 -- common/autotest_common.sh@918 -- # force=-F 00:08:38.824 22:00:20 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:38.824 mke2fs 1.46.5 (30-Dec-2021) 00:08:38.824 Discarding device blocks: 0/522240 done 00:08:38.824 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:38.824 Filesystem UUID: 13cd7307-e91e-4a69-a451-4367fa3cca90 00:08:38.824 Superblock backups stored on blocks: 00:08:38.824 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:38.824 00:08:38.824 Allocating group tables: 0/64 done 00:08:38.824 Writing inode tables: 0/64 done 00:08:40.197 Creating journal (8192 blocks): done 00:08:40.197 Writing superblocks and filesystem accounting information: 
0/64 done 00:08:40.197 00:08:40.197 22:00:22 -- common/autotest_common.sh@931 -- # return 0 00:08:40.197 22:00:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.761 22:00:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.761 22:00:22 -- target/filesystem.sh@25 -- # sync 00:08:40.761 22:00:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.761 22:00:22 -- target/filesystem.sh@27 -- # sync 00:08:40.761 22:00:23 -- target/filesystem.sh@29 -- # i=0 00:08:40.761 22:00:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.761 22:00:23 -- target/filesystem.sh@37 -- # kill -0 3859277 00:08:40.761 22:00:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.761 22:00:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.761 22:00:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.039 22:00:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.039 00:08:41.039 real 0m2.135s 00:08:41.039 user 0m0.012s 00:08:41.039 sys 0m0.042s 00:08:41.039 22:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.039 22:00:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.039 ************************************ 00:08:41.039 END TEST filesystem_in_capsule_ext4 00:08:41.039 ************************************ 00:08:41.039 22:00:23 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:41.039 22:00:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:41.039 22:00:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.039 22:00:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.039 ************************************ 00:08:41.039 START TEST filesystem_in_capsule_btrfs 00:08:41.039 ************************************ 00:08:41.040 22:00:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:41.040 22:00:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:41.040 
22:00:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.040 22:00:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:41.040 22:00:23 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:41.040 22:00:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:41.040 22:00:23 -- common/autotest_common.sh@914 -- # local i=0 00:08:41.040 22:00:23 -- common/autotest_common.sh@915 -- # local force 00:08:41.040 22:00:23 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:41.040 22:00:23 -- common/autotest_common.sh@920 -- # force=-f 00:08:41.040 22:00:23 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:41.606 btrfs-progs v6.6.2 00:08:41.606 See https://btrfs.readthedocs.io for more information. 00:08:41.606 00:08:41.606 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:41.606 NOTE: several default settings have changed in version 5.15, please make sure 00:08:41.606 this does not affect your deployments: 00:08:41.606 - DUP for metadata (-m dup) 00:08:41.606 - enabled no-holes (-O no-holes) 00:08:41.606 - enabled free-space-tree (-R free-space-tree) 00:08:41.606 00:08:41.606 Label: (null) 00:08:41.606 UUID: adec5944-3134-4652-966d-be64a41baca7 00:08:41.606 Node size: 16384 00:08:41.606 Sector size: 4096 00:08:41.606 Filesystem size: 510.00MiB 00:08:41.606 Block group profiles: 00:08:41.606 Data: single 8.00MiB 00:08:41.606 Metadata: DUP 32.00MiB 00:08:41.606 System: DUP 8.00MiB 00:08:41.606 SSD detected: yes 00:08:41.606 Zoned device: no 00:08:41.606 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:41.606 Runtime features: free-space-tree 00:08:41.606 Checksum: crc32c 00:08:41.606 Number of devices: 1 00:08:41.606 Devices: 00:08:41.606 ID SIZE PATH 00:08:41.606 1 510.00MiB /dev/nvme0n1p1 00:08:41.606 00:08:41.606 22:00:23 -- common/autotest_common.sh@931 -- # return 0 00:08:41.606 22:00:23 -- target/filesystem.sh@23 -- # mount 
/dev/nvme0n1p1 /mnt/device 00:08:41.606 22:00:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.606 22:00:23 -- target/filesystem.sh@25 -- # sync 00:08:41.606 22:00:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.606 22:00:23 -- target/filesystem.sh@27 -- # sync 00:08:41.606 22:00:23 -- target/filesystem.sh@29 -- # i=0 00:08:41.606 22:00:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.865 22:00:23 -- target/filesystem.sh@37 -- # kill -0 3859277 00:08:41.865 22:00:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.865 22:00:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.865 22:00:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.865 22:00:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.865 00:08:41.865 real 0m0.702s 00:08:41.865 user 0m0.013s 00:08:41.865 sys 0m0.044s 00:08:41.865 22:00:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.865 22:00:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.865 ************************************ 00:08:41.865 END TEST filesystem_in_capsule_btrfs 00:08:41.865 ************************************ 00:08:41.865 22:00:23 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:41.865 22:00:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:41.865 22:00:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.865 22:00:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.865 ************************************ 00:08:41.865 START TEST filesystem_in_capsule_xfs 00:08:41.865 ************************************ 00:08:41.865 22:00:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:41.865 22:00:23 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:41.865 22:00:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.865 22:00:23 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:41.865 
22:00:23 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:41.865 22:00:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:41.865 22:00:23 -- common/autotest_common.sh@914 -- # local i=0 00:08:41.865 22:00:23 -- common/autotest_common.sh@915 -- # local force 00:08:41.865 22:00:23 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:41.865 22:00:23 -- common/autotest_common.sh@920 -- # force=-f 00:08:41.865 22:00:23 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:41.865 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:41.865 = sectsz=512 attr=2, projid32bit=1 00:08:41.865 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:41.865 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:41.865 data = bsize=4096 blocks=130560, imaxpct=25 00:08:41.865 = sunit=0 swidth=0 blks 00:08:41.865 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:41.865 log =internal log bsize=4096 blocks=16384, version=2 00:08:41.866 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:41.866 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:43.238 Discarding blocks...Done. 
00:08:43.238 22:00:25 -- common/autotest_common.sh@931 -- # return 0 00:08:43.238 22:00:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:45.762 22:00:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:45.762 22:00:27 -- target/filesystem.sh@25 -- # sync 00:08:45.762 22:00:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:45.762 22:00:27 -- target/filesystem.sh@27 -- # sync 00:08:45.762 22:00:27 -- target/filesystem.sh@29 -- # i=0 00:08:45.762 22:00:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:45.762 22:00:27 -- target/filesystem.sh@37 -- # kill -0 3859277 00:08:45.762 22:00:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:45.762 22:00:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:45.762 22:00:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:45.762 22:00:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:45.762 00:08:45.762 real 0m3.628s 00:08:45.762 user 0m0.016s 00:08:45.762 sys 0m0.037s 00:08:45.762 22:00:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.762 22:00:27 -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 ************************************ 00:08:45.762 END TEST filesystem_in_capsule_xfs 00:08:45.762 ************************************ 00:08:45.762 22:00:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:45.762 22:00:27 -- target/filesystem.sh@93 -- # sync 00:08:45.762 22:00:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.762 22:00:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.762 22:00:27 -- common/autotest_common.sh@1205 -- # local i=0 00:08:45.762 22:00:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:45.762 22:00:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.762 22:00:27 
-- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:45.762 22:00:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.762 22:00:27 -- common/autotest_common.sh@1217 -- # return 0 00:08:45.762 22:00:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.762 22:00:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.762 22:00:27 -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 22:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.762 22:00:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:45.762 22:00:27 -- target/filesystem.sh@101 -- # killprocess 3859277 00:08:45.762 22:00:27 -- common/autotest_common.sh@936 -- # '[' -z 3859277 ']' 00:08:45.762 22:00:27 -- common/autotest_common.sh@940 -- # kill -0 3859277 00:08:45.762 22:00:27 -- common/autotest_common.sh@941 -- # uname 00:08:45.762 22:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.762 22:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3859277 00:08:45.762 22:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.762 22:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.762 22:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3859277' 00:08:45.762 killing process with pid 3859277 00:08:45.762 22:00:27 -- common/autotest_common.sh@955 -- # kill 3859277 00:08:45.762 [2024-04-24 22:00:27.853254] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:45.762 22:00:27 -- common/autotest_common.sh@960 -- # wait 3859277 00:08:46.329 22:00:28 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:46.329 00:08:46.329 real 0m12.977s 00:08:46.329 user 0m49.923s 00:08:46.329 sys 0m1.695s 00:08:46.329 22:00:28 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.329 22:00:28 -- common/autotest_common.sh@10 -- # set +x 00:08:46.329 ************************************ 00:08:46.329 END TEST nvmf_filesystem_in_capsule 00:08:46.329 ************************************ 00:08:46.329 22:00:28 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:46.329 22:00:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:46.329 22:00:28 -- nvmf/common.sh@117 -- # sync 00:08:46.329 22:00:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.329 22:00:28 -- nvmf/common.sh@120 -- # set +e 00:08:46.329 22:00:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.329 22:00:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.329 rmmod nvme_tcp 00:08:46.329 rmmod nvme_fabrics 00:08:46.329 rmmod nvme_keyring 00:08:46.329 22:00:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.329 22:00:28 -- nvmf/common.sh@124 -- # set -e 00:08:46.329 22:00:28 -- nvmf/common.sh@125 -- # return 0 00:08:46.329 22:00:28 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:46.329 22:00:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:46.329 22:00:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:46.329 22:00:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:46.329 22:00:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.329 22:00:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.329 22:00:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.329 22:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.329 22:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.855 22:00:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.855 00:08:48.855 real 0m29.909s 00:08:48.855 user 1m36.331s 00:08:48.855 sys 0m5.531s 00:08:48.855 22:00:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.855 22:00:30 -- common/autotest_common.sh@10 -- # set +x 
00:08:48.855 ************************************ 00:08:48.855 END TEST nvmf_filesystem 00:08:48.855 ************************************ 00:08:48.855 22:00:30 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:48.855 22:00:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:48.855 22:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.855 22:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:48.855 ************************************ 00:08:48.855 START TEST nvmf_discovery 00:08:48.855 ************************************ 00:08:48.856 22:00:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:48.856 * Looking for test storage... 00:08:48.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.856 22:00:30 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.856 22:00:30 -- nvmf/common.sh@7 -- # uname -s 00:08:48.856 22:00:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.856 22:00:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.856 22:00:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.856 22:00:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.856 22:00:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.856 22:00:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.856 22:00:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.856 22:00:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.856 22:00:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.856 22:00:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.856 22:00:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
00:08:48.856 22:00:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:48.856 22:00:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.856 22:00:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.856 22:00:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.856 22:00:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.856 22:00:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.856 22:00:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.856 22:00:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.856 22:00:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.856 22:00:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.856 22:00:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.856 22:00:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.856 22:00:30 -- paths/export.sh@5 -- # export PATH 00:08:48.856 22:00:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.856 22:00:30 -- nvmf/common.sh@47 -- # : 0 00:08:48.856 22:00:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.856 22:00:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.856 22:00:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.856 22:00:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.856 22:00:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.856 22:00:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.856 22:00:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.856 22:00:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.856 22:00:30 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:48.856 22:00:30 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:48.856 22:00:30 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:48.856 22:00:30 -- target/discovery.sh@15 -- # hash nvme 00:08:48.856 22:00:30 -- target/discovery.sh@20 -- # nvmftestinit 00:08:48.856 22:00:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:48.856 22:00:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.856 22:00:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:48.856 22:00:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:48.856 22:00:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:48.856 22:00:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.856 22:00:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.856 22:00:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.856 22:00:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:48.856 22:00:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:48.856 22:00:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.856 22:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.397 22:00:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.397 22:00:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.397 22:00:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.397 22:00:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.397 22:00:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.397 22:00:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.397 22:00:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.397 22:00:33 -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.397 22:00:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.397 22:00:33 -- nvmf/common.sh@296 -- # e810=() 00:08:51.397 22:00:33 -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.397 22:00:33 -- nvmf/common.sh@297 -- # x722=() 00:08:51.397 22:00:33 -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.397 22:00:33 -- nvmf/common.sh@298 -- # mlx=() 00:08:51.397 22:00:33 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.397 22:00:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.397 22:00:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.397 22:00:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.397 22:00:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.397 22:00:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.397 22:00:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:51.397 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:51.397 22:00:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.397 
22:00:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.397 22:00:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:51.397 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:51.397 22:00:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.397 22:00:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.397 22:00:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.397 22:00:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.397 22:00:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.397 22:00:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:51.397 Found net devices under 0000:84:00.0: cvl_0_0 00:08:51.397 22:00:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.397 22:00:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.397 22:00:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.397 22:00:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.397 22:00:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.397 22:00:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:51.397 Found net devices under 0000:84:00.1: cvl_0_1 00:08:51.397 22:00:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.397 22:00:33 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:51.397 22:00:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:51.397 22:00:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:51.397 22:00:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:51.397 22:00:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.397 22:00:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.398 22:00:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.398 22:00:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.398 22:00:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.398 22:00:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.398 22:00:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.398 22:00:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.398 22:00:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.398 22:00:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.398 22:00:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.398 22:00:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.398 22:00:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.398 22:00:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.398 22:00:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.398 22:00:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.398 22:00:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.398 22:00:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.398 22:00:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.398 22:00:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:08:51.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:51.398 00:08:51.398 --- 10.0.0.2 ping statistics --- 00:08:51.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.398 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:51.398 22:00:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:08:51.398 00:08:51.398 --- 10.0.0.1 ping statistics --- 00:08:51.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.398 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:51.398 22:00:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.398 22:00:33 -- nvmf/common.sh@411 -- # return 0 00:08:51.398 22:00:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.398 22:00:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.398 22:00:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.398 22:00:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.398 22:00:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.398 22:00:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.398 22:00:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.398 22:00:33 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:51.398 22:00:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.398 22:00:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.398 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.398 22:00:33 -- nvmf/common.sh@470 -- # nvmfpid=3862937 00:08:51.398 22:00:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.398 22:00:33 -- nvmf/common.sh@471 -- # waitforlisten 
3862937 00:08:51.398 22:00:33 -- common/autotest_common.sh@817 -- # '[' -z 3862937 ']' 00:08:51.398 22:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.398 22:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.398 22:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.398 22:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.398 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.398 [2024-04-24 22:00:33.384281] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:08:51.398 [2024-04-24 22:00:33.384373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.398 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.398 [2024-04-24 22:00:33.461021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.398 [2024-04-24 22:00:33.585184] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.398 [2024-04-24 22:00:33.585247] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.398 [2024-04-24 22:00:33.585264] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.398 [2024-04-24 22:00:33.585277] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.398 [2024-04-24 22:00:33.585289] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.398 [2024-04-24 22:00:33.585388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.398 [2024-04-24 22:00:33.585443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.398 [2024-04-24 22:00:33.585496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.398 [2024-04-24 22:00:33.585499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.656 22:00:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:51.656 22:00:33 -- common/autotest_common.sh@850 -- # return 0 00:08:51.656 22:00:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.656 22:00:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.656 22:00:33 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 [2024-04-24 22:00:33.748459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@26 -- # seq 1 4 00:08:51.656 22:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:51.656 22:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 Null1 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 [2024-04-24 22:00:33.788496] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:51.656 [2024-04-24 22:00:33.788816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:51.656 22:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 Null2 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:51.656 22:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 Null3 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:51.656 22:00:33 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 Null4 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.656 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.656 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.656 22:00:33 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:51.656 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.657 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.657 22:00:33 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:51.657 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.657 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.657 22:00:33 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.657 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.657 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.657 22:00:33 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 
-s 4430 00:08:51.657 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.657 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.657 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.657 22:00:33 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:08:51.914 00:08:51.914 Discovery Log Number of Records 6, Generation counter 6 00:08:51.914 =====Discovery Log Entry 0====== 00:08:51.914 trtype: tcp 00:08:51.914 adrfam: ipv4 00:08:51.914 subtype: current discovery subsystem 00:08:51.914 treq: not required 00:08:51.914 portid: 0 00:08:51.914 trsvcid: 4420 00:08:51.914 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:51.914 traddr: 10.0.0.2 00:08:51.914 eflags: explicit discovery connections, duplicate discovery information 00:08:51.914 sectype: none 00:08:51.914 =====Discovery Log Entry 1====== 00:08:51.914 trtype: tcp 00:08:51.914 adrfam: ipv4 00:08:51.914 subtype: nvme subsystem 00:08:51.914 treq: not required 00:08:51.914 portid: 0 00:08:51.914 trsvcid: 4420 00:08:51.914 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:51.914 traddr: 10.0.0.2 00:08:51.914 eflags: none 00:08:51.914 sectype: none 00:08:51.914 =====Discovery Log Entry 2====== 00:08:51.914 trtype: tcp 00:08:51.914 adrfam: ipv4 00:08:51.914 subtype: nvme subsystem 00:08:51.914 treq: not required 00:08:51.914 portid: 0 00:08:51.914 trsvcid: 4420 00:08:51.914 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:51.914 traddr: 10.0.0.2 00:08:51.914 eflags: none 00:08:51.914 sectype: none 00:08:51.914 =====Discovery Log Entry 3====== 00:08:51.914 trtype: tcp 00:08:51.914 adrfam: ipv4 00:08:51.914 subtype: nvme subsystem 00:08:51.914 treq: not required 00:08:51.914 portid: 0 00:08:51.914 trsvcid: 4420 00:08:51.914 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:51.914 traddr: 10.0.0.2 00:08:51.914 eflags: none 00:08:51.914 sectype: none 
00:08:51.914 =====Discovery Log Entry 4====== 00:08:51.914 trtype: tcp 00:08:51.914 adrfam: ipv4 00:08:51.914 subtype: nvme subsystem 00:08:51.914 treq: not required 00:08:51.914 portid: 0 00:08:51.914 trsvcid: 4420 00:08:51.914 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:51.914 traddr: 10.0.0.2 00:08:51.914 eflags: none 00:08:51.914 sectype: none 00:08:51.914 =====Discovery Log Entry 5====== 00:08:51.915 trtype: tcp 00:08:51.915 adrfam: ipv4 00:08:51.915 subtype: discovery subsystem referral 00:08:51.915 treq: not required 00:08:51.915 portid: 0 00:08:51.915 trsvcid: 4430 00:08:51.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:51.915 traddr: 10.0.0.2 00:08:51.915 eflags: none 00:08:51.915 sectype: none 00:08:51.915 22:00:33 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:51.915 Perform nvmf subsystem discovery via RPC 00:08:51.915 22:00:33 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:51.915 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 [2024-04-24 22:00:33.969107] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:51.915 [ 00:08:51.915 { 00:08:51.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:51.915 "subtype": "Discovery", 00:08:51.915 "listen_addresses": [ 00:08:51.915 { 00:08:51.915 "transport": "TCP", 00:08:51.915 "trtype": "TCP", 00:08:51.915 "adrfam": "IPv4", 00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "trsvcid": "4420" 00:08:51.915 } 00:08:51.915 ], 00:08:51.915 "allow_any_host": true, 00:08:51.915 "hosts": [] 00:08:51.915 }, 00:08:51.915 { 00:08:51.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.915 "subtype": "NVMe", 00:08:51.915 "listen_addresses": [ 00:08:51.915 { 00:08:51.915 "transport": "TCP", 00:08:51.915 "trtype": "TCP", 00:08:51.915 "adrfam": "IPv4", 
00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "trsvcid": "4420" 00:08:51.915 } 00:08:51.915 ], 00:08:51.915 "allow_any_host": true, 00:08:51.915 "hosts": [], 00:08:51.915 "serial_number": "SPDK00000000000001", 00:08:51.915 "model_number": "SPDK bdev Controller", 00:08:51.915 "max_namespaces": 32, 00:08:51.915 "min_cntlid": 1, 00:08:51.915 "max_cntlid": 65519, 00:08:51.915 "namespaces": [ 00:08:51.915 { 00:08:51.915 "nsid": 1, 00:08:51.915 "bdev_name": "Null1", 00:08:51.915 "name": "Null1", 00:08:51.915 "nguid": "25974757D23A46A488A33F4FA7B0ADE9", 00:08:51.915 "uuid": "25974757-d23a-46a4-88a3-3f4fa7b0ade9" 00:08:51.915 } 00:08:51.915 ] 00:08:51.915 }, 00:08:51.915 { 00:08:51.915 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:51.915 "subtype": "NVMe", 00:08:51.915 "listen_addresses": [ 00:08:51.915 { 00:08:51.915 "transport": "TCP", 00:08:51.915 "trtype": "TCP", 00:08:51.915 "adrfam": "IPv4", 00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "trsvcid": "4420" 00:08:51.915 } 00:08:51.915 ], 00:08:51.915 "allow_any_host": true, 00:08:51.915 "hosts": [], 00:08:51.915 "serial_number": "SPDK00000000000002", 00:08:51.915 "model_number": "SPDK bdev Controller", 00:08:51.915 "max_namespaces": 32, 00:08:51.915 "min_cntlid": 1, 00:08:51.915 "max_cntlid": 65519, 00:08:51.915 "namespaces": [ 00:08:51.915 { 00:08:51.915 "nsid": 1, 00:08:51.915 "bdev_name": "Null2", 00:08:51.915 "name": "Null2", 00:08:51.915 "nguid": "646D4B72D83047CF85488D006E14EBD3", 00:08:51.915 "uuid": "646d4b72-d830-47cf-8548-8d006e14ebd3" 00:08:51.915 } 00:08:51.915 ] 00:08:51.915 }, 00:08:51.915 { 00:08:51.915 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:51.915 "subtype": "NVMe", 00:08:51.915 "listen_addresses": [ 00:08:51.915 { 00:08:51.915 "transport": "TCP", 00:08:51.915 "trtype": "TCP", 00:08:51.915 "adrfam": "IPv4", 00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "trsvcid": "4420" 00:08:51.915 } 00:08:51.915 ], 00:08:51.915 "allow_any_host": true, 00:08:51.915 "hosts": [], 00:08:51.915 "serial_number": 
"SPDK00000000000003", 00:08:51.915 "model_number": "SPDK bdev Controller", 00:08:51.915 "max_namespaces": 32, 00:08:51.915 "min_cntlid": 1, 00:08:51.915 "max_cntlid": 65519, 00:08:51.915 "namespaces": [ 00:08:51.915 { 00:08:51.915 "nsid": 1, 00:08:51.915 "bdev_name": "Null3", 00:08:51.915 "name": "Null3", 00:08:51.915 "nguid": "A738A048CFAE4A4D813EBB511FB38D70", 00:08:51.915 "uuid": "a738a048-cfae-4a4d-813e-bb511fb38d70" 00:08:51.915 } 00:08:51.915 ] 00:08:51.915 }, 00:08:51.915 { 00:08:51.915 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:51.915 "subtype": "NVMe", 00:08:51.915 "listen_addresses": [ 00:08:51.915 { 00:08:51.915 "transport": "TCP", 00:08:51.915 "trtype": "TCP", 00:08:51.915 "adrfam": "IPv4", 00:08:51.915 "traddr": "10.0.0.2", 00:08:51.915 "trsvcid": "4420" 00:08:51.915 } 00:08:51.915 ], 00:08:51.915 "allow_any_host": true, 00:08:51.915 "hosts": [], 00:08:51.915 "serial_number": "SPDK00000000000004", 00:08:51.915 "model_number": "SPDK bdev Controller", 00:08:51.915 "max_namespaces": 32, 00:08:51.915 "min_cntlid": 1, 00:08:51.915 "max_cntlid": 65519, 00:08:51.915 "namespaces": [ 00:08:51.915 { 00:08:51.915 "nsid": 1, 00:08:51.915 "bdev_name": "Null4", 00:08:51.915 "name": "Null4", 00:08:51.915 "nguid": "5A85410E802B4B338E5E5146A8CA68A7", 00:08:51.915 "uuid": "5a85410e-802b-4b33-8e5e-5146a8ca68a7" 00:08:51.915 } 00:08:51.915 ] 00:08:51.915 } 00:08:51.915 ] 00:08:51.915 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:33 -- target/discovery.sh@42 -- # seq 1 4 00:08:51.915 22:00:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:51.915 22:00:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.915 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete 
Null1 00:08:51.915 22:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:51.915 22:00:34 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:51.915 22:00:34 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:51.915 22:00:34 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:51.915 22:00:34 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:51.915 22:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.915 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 22:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.915 22:00:34 -- target/discovery.sh@49 -- # check_bdevs= 00:08:51.915 22:00:34 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:51.915 22:00:34 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:51.915 22:00:34 -- target/discovery.sh@57 -- # nvmftestfini 00:08:51.915 22:00:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:51.915 22:00:34 -- nvmf/common.sh@117 -- # sync 00:08:51.915 22:00:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.915 22:00:34 -- nvmf/common.sh@120 -- # set +e 00:08:51.915 22:00:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.915 22:00:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.915 rmmod nvme_tcp 00:08:51.915 rmmod nvme_fabrics 00:08:51.915 rmmod nvme_keyring 00:08:51.915 22:00:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.915 22:00:34 -- nvmf/common.sh@124 -- # set -e 00:08:51.915 22:00:34 -- nvmf/common.sh@125 -- # return 0 00:08:51.915 22:00:34 -- nvmf/common.sh@478 -- # '[' -n 
3862937 ']' 00:08:51.915 22:00:34 -- nvmf/common.sh@479 -- # killprocess 3862937 00:08:51.915 22:00:34 -- common/autotest_common.sh@936 -- # '[' -z 3862937 ']' 00:08:51.915 22:00:34 -- common/autotest_common.sh@940 -- # kill -0 3862937 00:08:51.915 22:00:34 -- common/autotest_common.sh@941 -- # uname 00:08:51.915 22:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:51.915 22:00:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3862937 00:08:52.175 22:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:52.175 22:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:52.175 22:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3862937' 00:08:52.175 killing process with pid 3862937 00:08:52.175 22:00:34 -- common/autotest_common.sh@955 -- # kill 3862937 00:08:52.175 [2024-04-24 22:00:34.173740] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:52.175 [2024-04-24 22:00:34.173774] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:52.175 22:00:34 -- common/autotest_common.sh@960 -- # wait 3862937 00:08:52.461 22:00:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:52.461 22:00:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:52.461 22:00:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:52.461 22:00:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.461 22:00:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.461 22:00:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.461 22:00:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.461 22:00:34 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:54.368 22:00:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.368 00:08:54.368 real 0m5.896s 00:08:54.368 user 0m4.287s 00:08:54.368 sys 0m2.217s 00:08:54.368 22:00:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:54.368 22:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.368 ************************************ 00:08:54.368 END TEST nvmf_discovery 00:08:54.368 ************************************ 00:08:54.368 22:00:36 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:54.368 22:00:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:54.368 22:00:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.368 22:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 ************************************ 00:08:54.625 START TEST nvmf_referrals 00:08:54.625 ************************************ 00:08:54.625 22:00:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:54.625 * Looking for test storage... 
00:08:54.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.625 22:00:36 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.625 22:00:36 -- nvmf/common.sh@7 -- # uname -s 00:08:54.625 22:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.625 22:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.625 22:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.625 22:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.625 22:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.625 22:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.625 22:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.625 22:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.625 22:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.625 22:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.626 22:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:54.626 22:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:54.626 22:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.626 22:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.626 22:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.626 22:00:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.626 22:00:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.626 22:00:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.626 22:00:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.626 22:00:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.626 22:00:36 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.626 22:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.626 22:00:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.626 22:00:36 -- paths/export.sh@5 -- # export PATH 00:08:54.626 22:00:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.626 22:00:36 -- nvmf/common.sh@47 -- # : 0 00:08:54.626 22:00:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.626 22:00:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.626 22:00:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.626 22:00:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.626 22:00:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.626 22:00:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.626 22:00:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.626 22:00:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.626 22:00:36 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:54.626 22:00:36 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:54.626 22:00:36 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:54.626 22:00:36 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:54.626 22:00:36 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:54.626 22:00:36 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:54.626 22:00:36 -- target/referrals.sh@37 -- # nvmftestinit 00:08:54.626 22:00:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:54.626 22:00:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.626 22:00:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:54.626 22:00:36 -- nvmf/common.sh@399 -- # local 
-g is_hw=no 00:08:54.626 22:00:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:54.626 22:00:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.626 22:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.626 22:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.626 22:00:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:54.626 22:00:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:54.626 22:00:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.626 22:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:57.154 22:00:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.154 22:00:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.154 22:00:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.154 22:00:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.154 22:00:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.154 22:00:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.154 22:00:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.154 22:00:39 -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.154 22:00:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.154 22:00:39 -- nvmf/common.sh@296 -- # e810=() 00:08:57.154 22:00:39 -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.154 22:00:39 -- nvmf/common.sh@297 -- # x722=() 00:08:57.154 22:00:39 -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.154 22:00:39 -- nvmf/common.sh@298 -- # mlx=() 00:08:57.154 22:00:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.154 22:00:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.154 22:00:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.154 22:00:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:57.154 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:57.154 22:00:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.154 22:00:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:57.154 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:57.154 22:00:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.154 22:00:39 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.154 22:00:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.154 22:00:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.154 22:00:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:57.154 Found net devices under 0000:84:00.0: cvl_0_0 00:08:57.154 22:00:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.154 22:00:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.154 22:00:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.154 22:00:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:57.154 Found net devices under 0000:84:00.1: cvl_0_1 00:08:57.154 22:00:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:57.154 22:00:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:57.154 22:00:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.154 22:00:39 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.154 22:00:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.154 22:00:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.154 22:00:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.154 22:00:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.154 22:00:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.154 22:00:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.154 22:00:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.154 22:00:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.154 22:00:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.154 22:00:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.154 22:00:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.154 22:00:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.154 22:00:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.154 22:00:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.154 22:00:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.154 22:00:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.154 22:00:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:08:57.154 00:08:57.154 --- 10.0.0.2 ping statistics --- 00:08:57.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.154 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:57.154 22:00:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:57.154 00:08:57.154 --- 10.0.0.1 ping statistics --- 00:08:57.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.154 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:57.154 22:00:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.154 22:00:39 -- nvmf/common.sh@411 -- # return 0 00:08:57.154 22:00:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:57.154 22:00:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.154 22:00:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:57.154 22:00:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.154 22:00:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:57.154 22:00:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:57.154 22:00:39 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:57.154 22:00:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:57.154 22:00:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:57.154 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.154 22:00:39 -- nvmf/common.sh@470 -- # nvmfpid=3865061 00:08:57.154 22:00:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.154 22:00:39 -- nvmf/common.sh@471 -- # waitforlisten 3865061 00:08:57.154 22:00:39 -- 
common/autotest_common.sh@817 -- # '[' -z 3865061 ']' 00:08:57.154 22:00:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.154 22:00:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.154 22:00:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.154 22:00:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.154 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.154 [2024-04-24 22:00:39.378277] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:08:57.154 [2024-04-24 22:00:39.378371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.413 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.413 [2024-04-24 22:00:39.458279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.413 [2024-04-24 22:00:39.582991] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.413 [2024-04-24 22:00:39.583071] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.413 [2024-04-24 22:00:39.583088] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.413 [2024-04-24 22:00:39.583102] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.413 [2024-04-24 22:00:39.583114] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:57.413 [2024-04-24 22:00:39.583172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.413 [2024-04-24 22:00:39.583198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.413 [2024-04-24 22:00:39.583254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.413 [2024-04-24 22:00:39.583258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.671 22:00:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.671 22:00:39 -- common/autotest_common.sh@850 -- # return 0 00:08:57.671 22:00:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:57.671 22:00:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.671 22:00:39 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 [2024-04-24 22:00:39.754293] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 [2024-04-24 22:00:39.766265] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:57.671 [2024-04-24 22:00:39.766613] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:57.671 22:00:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.671 22:00:39 -- target/referrals.sh@48 -- # jq length 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:57.671 22:00:39 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:57.671 22:00:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.671 22:00:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.671 22:00:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.671 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.671 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.671 22:00:39 -- target/referrals.sh@21 -- # sort 
00:08:57.671 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.671 22:00:39 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.671 22:00:39 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:57.671 22:00:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.671 22:00:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.671 22:00:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.671 22:00:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.671 22:00:39 -- target/referrals.sh@26 -- # sort 00:08:57.928 22:00:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.928 22:00:39 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.928 22:00:39 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.928 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.928 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.928 22:00:39 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.928 22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.928 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 22:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.928 22:00:39 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:57.928 
22:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.928 22:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.928 22:00:40 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.928 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.928 22:00:40 -- target/referrals.sh@56 -- # jq length 00:08:57.928 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.928 22:00:40 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:57.928 22:00:40 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:57.929 22:00:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.929 22:00:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.929 22:00:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.929 22:00:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.929 22:00:40 -- target/referrals.sh@26 -- # sort 00:08:57.929 22:00:40 -- target/referrals.sh@26 -- # echo 00:08:57.929 22:00:40 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:57.929 22:00:40 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:57.929 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.929 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.929 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.929 22:00:40 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:57.929 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.929 22:00:40 -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.929 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.929 22:00:40 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:57.929 22:00:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.929 22:00:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.929 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.929 22:00:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.929 22:00:40 -- target/referrals.sh@21 -- # sort 00:08:57.929 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.929 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.929 22:00:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:57.929 22:00:40 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.185 22:00:40 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:58.185 22:00:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.185 22:00:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.185 22:00:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.185 22:00:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.185 22:00:40 -- target/referrals.sh@26 -- # sort 00:08:58.185 22:00:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:58.185 22:00:40 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.185 22:00:40 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:58.185 22:00:40 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:58.185 22:00:40 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.185 
22:00:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.185 22:00:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.185 22:00:40 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.185 22:00:40 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.185 22:00:40 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.185 22:00:40 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:58.185 22:00:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.185 22:00:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.443 22:00:40 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.443 22:00:40 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.443 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.443 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.443 22:00:40 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:58.443 22:00:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.443 22:00:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.443 22:00:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.443 22:00:40 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:08:58.443 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.443 22:00:40 -- target/referrals.sh@21 -- # sort 00:08:58.443 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.443 22:00:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:58.443 22:00:40 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.443 22:00:40 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:58.443 22:00:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.443 22:00:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.443 22:00:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.443 22:00:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.443 22:00:40 -- target/referrals.sh@26 -- # sort 00:08:58.443 22:00:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:58.443 22:00:40 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.443 22:00:40 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:58.443 22:00:40 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:58.443 22:00:40 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.443 22:00:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.443 22:00:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.443 22:00:40 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:58.443 22:00:40 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.443 22:00:40 -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:08:58.443 22:00:40 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:58.443 22:00:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.443 22:00:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.701 22:00:40 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.701 22:00:40 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:58.701 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.701 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.701 22:00:40 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.701 22:00:40 -- target/referrals.sh@82 -- # jq length 00:08:58.701 22:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.701 22:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 22:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.701 22:00:40 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:58.701 22:00:40 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:58.701 22:00:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.701 22:00:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.701 22:00:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.701 22:00:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:08:58.701 22:00:40 -- target/referrals.sh@26 -- # sort 00:08:58.701 22:00:40 -- target/referrals.sh@26 -- # echo 00:08:58.701 22:00:40 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:58.701 22:00:40 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:58.701 22:00:40 -- target/referrals.sh@86 -- # nvmftestfini 00:08:58.701 22:00:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:58.701 22:00:40 -- nvmf/common.sh@117 -- # sync 00:08:58.959 22:00:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.959 22:00:40 -- nvmf/common.sh@120 -- # set +e 00:08:58.959 22:00:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.959 22:00:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.959 rmmod nvme_tcp 00:08:58.959 rmmod nvme_fabrics 00:08:58.959 rmmod nvme_keyring 00:08:58.959 22:00:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.959 22:00:40 -- nvmf/common.sh@124 -- # set -e 00:08:58.959 22:00:41 -- nvmf/common.sh@125 -- # return 0 00:08:58.959 22:00:41 -- nvmf/common.sh@478 -- # '[' -n 3865061 ']' 00:08:58.959 22:00:41 -- nvmf/common.sh@479 -- # killprocess 3865061 00:08:58.959 22:00:41 -- common/autotest_common.sh@936 -- # '[' -z 3865061 ']' 00:08:58.959 22:00:41 -- common/autotest_common.sh@940 -- # kill -0 3865061 00:08:58.959 22:00:41 -- common/autotest_common.sh@941 -- # uname 00:08:58.959 22:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:58.959 22:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3865061 00:08:58.959 22:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:58.959 22:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:58.959 22:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3865061' 00:08:58.959 killing process with pid 3865061 00:08:58.959 22:00:41 -- common/autotest_common.sh@955 -- # kill 3865061 00:08:58.959 [2024-04-24 22:00:41.050157] app.c: 
937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:58.959 22:00:41 -- common/autotest_common.sh@960 -- # wait 3865061 00:08:59.218 22:00:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:59.218 22:00:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:59.218 22:00:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:59.218 22:00:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.218 22:00:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.218 22:00:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.218 22:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.218 22:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.148 22:00:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.148 00:09:01.148 real 0m6.711s 00:09:01.148 user 0m8.518s 00:09:01.148 sys 0m2.288s 00:09:01.148 22:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:01.148 22:00:43 -- common/autotest_common.sh@10 -- # set +x 00:09:01.148 ************************************ 00:09:01.148 END TEST nvmf_referrals 00:09:01.148 ************************************ 00:09:01.406 22:00:43 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.406 22:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:01.407 22:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.407 22:00:43 -- common/autotest_common.sh@10 -- # set +x 00:09:01.407 ************************************ 00:09:01.407 START TEST nvmf_connect_disconnect 00:09:01.407 ************************************ 00:09:01.407 22:00:43 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.407 * Looking for test storage... 00:09:01.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.407 22:00:43 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.407 22:00:43 -- nvmf/common.sh@7 -- # uname -s 00:09:01.407 22:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.407 22:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.407 22:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.407 22:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.407 22:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.407 22:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.407 22:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.407 22:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.407 22:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.407 22:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.407 22:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:01.407 22:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:01.407 22:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.407 22:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.407 22:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.407 22:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.407 22:00:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.407 22:00:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.407 22:00:43 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.407 22:00:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.407 22:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.407 22:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.407 22:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.407 22:00:43 -- paths/export.sh@5 -- # export PATH 00:09:01.407 22:00:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.407 22:00:43 -- nvmf/common.sh@47 -- # : 0 00:09:01.407 22:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.407 22:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.407 22:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.407 22:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.407 22:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.407 22:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.407 22:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.407 22:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.407 22:00:43 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.407 22:00:43 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.407 22:00:43 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:01.407 22:00:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:01.407 22:00:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.407 22:00:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:01.407 22:00:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:01.407 22:00:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:01.407 22:00:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.407 22:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.407 22:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:01.407 22:00:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:01.407 22:00:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:01.407 22:00:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.407 22:00:43 -- common/autotest_common.sh@10 -- # set +x 00:09:03.936 22:00:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:03.936 22:00:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.936 22:00:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.936 22:00:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.936 22:00:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.936 22:00:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.936 22:00:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.936 22:00:45 -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.936 22:00:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.936 22:00:45 -- nvmf/common.sh@296 -- # e810=() 00:09:03.936 22:00:45 -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.936 22:00:45 -- nvmf/common.sh@297 -- # x722=() 00:09:03.936 22:00:45 -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.936 22:00:45 -- nvmf/common.sh@298 -- # mlx=() 00:09:03.936 22:00:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.936 22:00:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
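The `gather_supported_nvmf_pci_devs` trace that follows builds per-family arrays of known Intel (e810, x722) and Mellanox device IDs and matches discovered NICs against them. A simplified, self-contained sketch of that classification, using the device IDs visible in this log (the real function also consults a PCI bus cache and driver bindings):

```shell
#!/usr/bin/env bash
# Simplified sketch of NIC classification from nvmf/common.sh:
# known PCI device IDs grouped into families, then matched against
# a queried ID (a hard-coded sample here, not a real PCI scan).
e810=(0x1592 0x159b)
x722=(0x37d2)
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)

classify() {
    local id=$1 dev
    for dev in "${e810[@]}"; do [[ $id == "$dev" ]] && { echo e810; return; }; done
    for dev in "${x722[@]}"; do [[ $id == "$dev" ]] && { echo x722; return; }; done
    for dev in "${mlx[@]}";  do [[ $id == "$dev" ]] && { echo mlx;  return; }; done
    echo unknown
}

classify 0x159b   # -> e810 (the device ID found at 0000:84:00.0 in this run)
```

In this run both ports (0000:84:00.0 and 0000:84:00.1) report 0x8086:0x159b, so the e810 branch is taken and the `cvl_0_*` net devices under those PCI addresses become the test interfaces.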
00:09:03.936 22:00:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.936 22:00:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.936 22:00:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.936 22:00:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.936 22:00:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:03.936 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:03.936 22:00:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.936 22:00:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:03.936 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:03.936 22:00:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.936 
22:00:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.936 22:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.936 22:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.936 22:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:03.936 Found net devices under 0000:84:00.0: cvl_0_0 00:09:03.936 22:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.936 22:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.936 22:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.936 22:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.936 22:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:03.936 Found net devices under 0000:84:00.1: cvl_0_1 00:09:03.936 22:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.936 22:00:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:03.936 22:00:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:03.936 22:00:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:03.936 22:00:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.936 22:00:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.936 22:00:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.936 22:00:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.936 22:00:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.936 22:00:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.936 22:00:45 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.936 22:00:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.936 22:00:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.936 22:00:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.936 22:00:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.936 22:00:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.936 22:00:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.936 22:00:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.936 22:00:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.936 22:00:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.936 22:00:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.936 22:00:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.937 22:00:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.937 22:00:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:09:03.937 00:09:03.937 --- 10.0.0.2 ping statistics --- 00:09:03.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.937 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:09:03.937 22:00:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:03.937 00:09:03.937 --- 10.0.0.1 ping statistics --- 00:09:03.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.937 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:03.937 22:00:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.937 22:00:46 -- nvmf/common.sh@411 -- # return 0 00:09:03.937 22:00:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:03.937 22:00:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.937 22:00:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:03.937 22:00:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:03.937 22:00:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.937 22:00:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:03.937 22:00:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:03.937 22:00:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:03.937 22:00:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:03.937 22:00:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:03.937 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:03.937 22:00:46 -- nvmf/common.sh@470 -- # nvmfpid=3867373 00:09:03.937 22:00:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.937 22:00:46 -- nvmf/common.sh@471 -- # waitforlisten 3867373 00:09:03.937 22:00:46 -- common/autotest_common.sh@817 -- # '[' -z 3867373 ']' 00:09:03.937 22:00:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.937 22:00:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:03.937 22:00:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:03.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.937 22:00:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:03.937 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:03.937 [2024-04-24 22:00:46.137631] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:09:03.937 [2024-04-24 22:00:46.137737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.937 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.195 [2024-04-24 22:00:46.225862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.195 [2024-04-24 22:00:46.347602] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.195 [2024-04-24 22:00:46.347666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.195 [2024-04-24 22:00:46.347682] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.195 [2024-04-24 22:00:46.347695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.195 [2024-04-24 22:00:46.347708] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
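The `nvmf_tcp_init` trace earlier in this section moves the target-side interface into its own network namespace so initiator and target can share one host over real NICs. A dry-run sketch of that topology (the `ip` binary is stubbed to echo here, since the real commands need root and the `cvl_0_*` interfaces; command shapes follow the trace above):

```shell
#!/usr/bin/env bash
# Dry-run of the netns topology from nvmf/common.sh: target NIC in a
# namespace at 10.0.0.2, initiator NIC in the root namespace at 10.0.0.1.
# "ip" is stubbed to echo; real runs execute these as root.
ip() { echo "ip $*"; }

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
```

The two `ping -c 1` checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then verify the path in both directions before the target is started with `ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`.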
00:09:04.195 [2024-04-24 22:00:46.347768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.195 [2024-04-24 22:00:46.347823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.195 [2024-04-24 22:00:46.347875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.195 [2024-04-24 22:00:46.347878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.453 22:00:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:04.453 22:00:46 -- common/autotest_common.sh@850 -- # return 0 00:09:04.453 22:00:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:04.453 22:00:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:04.453 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 22:00:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:04.454 22:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.454 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 [2024-04-24 22:00:46.517551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.454 22:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:04.454 22:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.454 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 22:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.454 22:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.454 22:00:46 -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.454 22:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.454 22:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.454 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 22:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.454 22:00:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.454 22:00:46 -- common/autotest_common.sh@10 -- # set +x 00:09:04.454 [2024-04-24 22:00:46.574822] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:04.454 [2024-04-24 22:00:46.575174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.454 22:00:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:04.454 22:00:46 -- target/connect_disconnect.sh@34 -- # set +x 00:09:07.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.624 22:01:00 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:18.624 22:01:00 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:18.624 22:01:00 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:09:18.624 22:01:00 -- nvmf/common.sh@117 -- # sync 00:09:18.624 22:01:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.624 22:01:00 -- nvmf/common.sh@120 -- # set +e 00:09:18.624 22:01:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.624 22:01:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.624 rmmod nvme_tcp 00:09:18.624 rmmod nvme_fabrics 00:09:18.624 rmmod nvme_keyring 00:09:18.624 22:01:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.624 22:01:00 -- nvmf/common.sh@124 -- # set -e 00:09:18.624 22:01:00 -- nvmf/common.sh@125 -- # return 0 00:09:18.624 22:01:00 -- nvmf/common.sh@478 -- # '[' -n 3867373 ']' 00:09:18.624 22:01:00 -- nvmf/common.sh@479 -- # killprocess 3867373 00:09:18.624 22:01:00 -- common/autotest_common.sh@936 -- # '[' -z 3867373 ']' 00:09:18.624 22:01:00 -- common/autotest_common.sh@940 -- # kill -0 3867373 00:09:18.624 22:01:00 -- common/autotest_common.sh@941 -- # uname 00:09:18.624 22:01:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:18.624 22:01:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3867373 00:09:18.624 22:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:18.624 22:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:18.624 22:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3867373' 00:09:18.624 killing process with pid 3867373 00:09:18.624 22:01:00 -- common/autotest_common.sh@955 -- # kill 3867373 00:09:18.624 [2024-04-24 22:01:00.391077] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:18.624 22:01:00 -- common/autotest_common.sh@960 -- # wait 3867373 00:09:18.624 22:01:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:18.624 22:01:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:18.624 22:01:00 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:18.624 22:01:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.624 22:01:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.624 22:01:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.624 22:01:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.625 22:01:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.559 22:01:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.559 00:09:20.559 real 0m19.216s 00:09:20.559 user 0m57.011s 00:09:20.559 sys 0m3.311s 00:09:20.559 22:01:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.559 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:09:20.559 ************************************ 00:09:20.559 END TEST nvmf_connect_disconnect 00:09:20.559 ************************************ 00:09:20.559 22:01:02 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:20.559 22:01:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:20.559 22:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.559 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 ************************************ 00:09:20.817 START TEST nvmf_multitarget 00:09:20.817 ************************************ 00:09:20.817 22:01:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:20.817 * Looking for test storage... 
00:09:20.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.817 22:01:02 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.817 22:01:02 -- nvmf/common.sh@7 -- # uname -s 00:09:20.817 22:01:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.817 22:01:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.817 22:01:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.817 22:01:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.817 22:01:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.817 22:01:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.817 22:01:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.817 22:01:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.817 22:01:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.817 22:01:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.817 22:01:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.817 22:01:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.817 22:01:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.817 22:01:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.817 22:01:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.817 22:01:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.817 22:01:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.817 22:01:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.817 22:01:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.817 22:01:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.817 22:01:02 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.817 22:01:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.817 22:01:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.817 22:01:02 -- paths/export.sh@5 -- # export PATH 00:09:20.817 22:01:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.817 22:01:02 -- nvmf/common.sh@47 -- # : 0 00:09:20.817 22:01:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.817 22:01:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.817 22:01:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.817 22:01:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.817 22:01:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.817 22:01:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.817 22:01:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.817 22:01:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.818 22:01:02 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:20.818 22:01:02 -- target/multitarget.sh@15 -- # nvmftestinit 00:09:20.818 22:01:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:20.818 22:01:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.818 22:01:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:20.818 22:01:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:20.818 22:01:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:20.818 22:01:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.818 22:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.818 22:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.818 22:01:02 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:20.818 22:01:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:20.818 22:01:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.818 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:09:23.345 22:01:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:23.345 22:01:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.345 22:01:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.345 22:01:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.345 22:01:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.345 22:01:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.345 22:01:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.345 22:01:05 -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.345 22:01:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.345 22:01:05 -- nvmf/common.sh@296 -- # e810=() 00:09:23.345 22:01:05 -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.345 22:01:05 -- nvmf/common.sh@297 -- # x722=() 00:09:23.345 22:01:05 -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.345 22:01:05 -- nvmf/common.sh@298 -- # mlx=() 00:09:23.345 22:01:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.345 22:01:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.345 22:01:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.345 22:01:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.345 22:01:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.345 22:01:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.346 22:01:05 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.346 22:01:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.346 22:01:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:23.346 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:23.346 22:01:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.346 22:01:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:23.346 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:23.346 22:01:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.346 22:01:05 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.346 22:01:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.346 22:01:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.346 22:01:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:23.346 Found net devices under 0000:84:00.0: cvl_0_0 00:09:23.346 22:01:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.346 22:01:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.346 22:01:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.346 22:01:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:23.346 Found net devices under 0000:84:00.1: cvl_0_1 00:09:23.346 22:01:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:23.346 22:01:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:23.346 22:01:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.346 22:01:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.346 22:01:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.346 22:01:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.346 22:01:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.346 22:01:05 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.346 22:01:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.346 22:01:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.346 22:01:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.346 22:01:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.346 22:01:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.346 22:01:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.346 22:01:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.346 22:01:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.346 22:01:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.346 22:01:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.346 22:01:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.346 22:01:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.346 22:01:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:09:23.346 00:09:23.346 --- 10.0.0.2 ping statistics --- 00:09:23.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.346 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:23.346 22:01:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:09:23.346 00:09:23.346 --- 10.0.0.1 ping statistics --- 00:09:23.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.346 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:23.346 22:01:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.346 22:01:05 -- nvmf/common.sh@411 -- # return 0 00:09:23.346 22:01:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:23.346 22:01:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.346 22:01:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:23.346 22:01:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.346 22:01:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:23.346 22:01:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:23.346 22:01:05 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:23.346 22:01:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:23.346 22:01:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:23.346 22:01:05 -- common/autotest_common.sh@10 -- # set +x 00:09:23.346 22:01:05 -- nvmf/common.sh@470 -- # nvmfpid=3871156 00:09:23.346 22:01:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.346 22:01:05 -- nvmf/common.sh@471 -- # waitforlisten 3871156 00:09:23.346 22:01:05 -- common/autotest_common.sh@817 -- # '[' -z 3871156 ']' 00:09:23.346 22:01:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.346 22:01:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:23.346 22:01:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.346 22:01:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:23.346 22:01:05 -- common/autotest_common.sh@10 -- # set +x 00:09:23.346 [2024-04-24 22:01:05.444292] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:09:23.346 [2024-04-24 22:01:05.444384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.346 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.346 [2024-04-24 22:01:05.521305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.604 [2024-04-24 22:01:05.646498] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.604 [2024-04-24 22:01:05.646566] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.604 [2024-04-24 22:01:05.646582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.604 [2024-04-24 22:01:05.646595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.604 [2024-04-24 22:01:05.646607] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:23.604 [2024-04-24 22:01:05.646696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.604 [2024-04-24 22:01:05.646751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.604 [2024-04-24 22:01:05.646803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.604 [2024-04-24 22:01:05.646807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.604 22:01:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.604 22:01:05 -- common/autotest_common.sh@850 -- # return 0 00:09:23.604 22:01:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:23.604 22:01:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:23.604 22:01:05 -- common/autotest_common.sh@10 -- # set +x 00:09:23.605 22:01:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.605 22:01:05 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:23.605 22:01:05 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:23.605 22:01:05 -- target/multitarget.sh@21 -- # jq length 00:09:23.862 22:01:05 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:23.862 22:01:05 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:24.119 "nvmf_tgt_1" 00:09:24.119 22:01:06 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:24.119 "nvmf_tgt_2" 00:09:24.119 22:01:06 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:24.119 22:01:06 -- target/multitarget.sh@28 -- # jq length 00:09:24.377 
22:01:06 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:24.377 22:01:06 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:24.377 true 00:09:24.635 22:01:06 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:24.635 true 00:09:24.635 22:01:06 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:24.635 22:01:06 -- target/multitarget.sh@35 -- # jq length 00:09:24.635 22:01:06 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:24.635 22:01:06 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:24.635 22:01:06 -- target/multitarget.sh@41 -- # nvmftestfini 00:09:24.635 22:01:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:24.635 22:01:06 -- nvmf/common.sh@117 -- # sync 00:09:24.893 22:01:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.893 22:01:06 -- nvmf/common.sh@120 -- # set +e 00:09:24.893 22:01:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.893 22:01:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.893 rmmod nvme_tcp 00:09:24.893 rmmod nvme_fabrics 00:09:24.893 rmmod nvme_keyring 00:09:24.893 22:01:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.893 22:01:06 -- nvmf/common.sh@124 -- # set -e 00:09:24.893 22:01:06 -- nvmf/common.sh@125 -- # return 0 00:09:24.893 22:01:06 -- nvmf/common.sh@478 -- # '[' -n 3871156 ']' 00:09:24.893 22:01:06 -- nvmf/common.sh@479 -- # killprocess 3871156 00:09:24.893 22:01:06 -- common/autotest_common.sh@936 -- # '[' -z 3871156 ']' 00:09:24.893 22:01:06 -- common/autotest_common.sh@940 -- # kill -0 3871156 00:09:24.893 22:01:06 -- common/autotest_common.sh@941 -- # uname 00:09:24.893 22:01:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:09:24.893 22:01:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3871156 00:09:24.893 22:01:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.893 22:01:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.893 22:01:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3871156' 00:09:24.893 killing process with pid 3871156 00:09:24.893 22:01:06 -- common/autotest_common.sh@955 -- # kill 3871156 00:09:24.893 22:01:06 -- common/autotest_common.sh@960 -- # wait 3871156 00:09:25.151 22:01:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:25.151 22:01:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:25.151 22:01:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:25.151 22:01:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.151 22:01:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.151 22:01:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.151 22:01:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.151 22:01:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.679 22:01:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.679 00:09:27.679 real 0m6.428s 00:09:27.679 user 0m8.109s 00:09:27.679 sys 0m2.272s 00:09:27.679 22:01:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.679 22:01:09 -- common/autotest_common.sh@10 -- # set +x 00:09:27.679 ************************************ 00:09:27.679 END TEST nvmf_multitarget 00:09:27.679 ************************************ 00:09:27.679 22:01:09 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:27.679 22:01:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:27.679 22:01:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.679 22:01:09 -- common/autotest_common.sh@10 -- # set +x 
00:09:27.679 ************************************ 00:09:27.680 START TEST nvmf_rpc 00:09:27.680 ************************************ 00:09:27.680 22:01:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:27.680 * Looking for test storage... 00:09:27.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.680 22:01:09 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.680 22:01:09 -- nvmf/common.sh@7 -- # uname -s 00:09:27.680 22:01:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.680 22:01:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.680 22:01:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.680 22:01:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.680 22:01:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.680 22:01:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.680 22:01:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.680 22:01:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.680 22:01:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.680 22:01:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.680 22:01:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:27.680 22:01:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:27.680 22:01:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.680 22:01:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.680 22:01:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.680 22:01:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.680 22:01:09 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.680 22:01:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.680 22:01:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.680 22:01:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.680 22:01:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.680 22:01:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.680 22:01:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.680 22:01:09 -- paths/export.sh@5 -- # export PATH 00:09:27.680 22:01:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.680 22:01:09 -- nvmf/common.sh@47 -- # : 0 00:09:27.680 22:01:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.680 22:01:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.680 22:01:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.680 22:01:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.680 22:01:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.680 22:01:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.680 22:01:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.680 22:01:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.680 22:01:09 -- target/rpc.sh@11 -- # loops=5 00:09:27.680 22:01:09 -- target/rpc.sh@23 -- # nvmftestinit 00:09:27.680 22:01:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:27.680 
22:01:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.680 22:01:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:27.680 22:01:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:27.680 22:01:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:27.680 22:01:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.680 22:01:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.680 22:01:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.680 22:01:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:27.680 22:01:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:27.680 22:01:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.680 22:01:09 -- common/autotest_common.sh@10 -- # set +x 00:09:29.580 22:01:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:29.580 22:01:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.580 22:01:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.580 22:01:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.580 22:01:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.580 22:01:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.580 22:01:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.580 22:01:11 -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.580 22:01:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.580 22:01:11 -- nvmf/common.sh@296 -- # e810=() 00:09:29.580 22:01:11 -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.580 22:01:11 -- nvmf/common.sh@297 -- # x722=() 00:09:29.580 22:01:11 -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.580 22:01:11 -- nvmf/common.sh@298 -- # mlx=() 00:09:29.580 22:01:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.580 22:01:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.580 22:01:11 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.580 22:01:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.580 22:01:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.580 22:01:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.580 22:01:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.580 22:01:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:29.580 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:29.580 22:01:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.580 22:01:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.580 22:01:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:29.580 Found 
0000:84:00.1 (0x8086 - 0x159b) 00:09:29.581 22:01:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.581 22:01:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.581 22:01:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.581 22:01:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:29.581 22:01:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.581 22:01:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:29.581 Found net devices under 0000:84:00.0: cvl_0_0 00:09:29.581 22:01:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.581 22:01:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.581 22:01:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.581 22:01:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:29.581 22:01:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.581 22:01:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:29.581 Found net devices under 0000:84:00.1: cvl_0_1 00:09:29.581 22:01:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.581 22:01:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:29.581 22:01:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:29.581 22:01:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:29.581 22:01:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:29.581 
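The "Found net devices under ..." lines above come from resolving each PCI function to its kernel net device by globbing sysfs. A stand-alone sketch of that lookup (the optional sysfs-root argument is an addition here so the function can be exercised against a fake tree; the real code in nvmf/common.sh always uses /sys):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> netdev step in gather_supported_nvmf_pci_devs:
# glob /sys/bus/pci/devices/<bdf>/net/* and strip the path prefix,
# leaving just the interface name (e.g. cvl_0_0).
list_pci_netdevs() {
  local pci=$1 sysfs=${2:-/sys}
  local pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")
  printf '%s\n' "${pci_net_devs[@]}"
}
```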
22:01:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:29.581 22:01:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.581 22:01:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.581 22:01:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.581 22:01:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.581 22:01:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.581 22:01:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.581 22:01:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.581 22:01:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.581 22:01:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.581 22:01:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.581 22:01:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.581 22:01:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.581 22:01:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.581 22:01:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.581 22:01:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.839 22:01:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.839 22:01:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.839 22:01:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.839 22:01:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.839 22:01:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:09:29.839 00:09:29.839 --- 10.0.0.2 ping statistics --- 00:09:29.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.839 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:09:29.839 22:01:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:29.839 00:09:29.839 --- 10.0.0.1 ping statistics --- 00:09:29.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.839 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:29.839 22:01:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.839 22:01:11 -- nvmf/common.sh@411 -- # return 0 00:09:29.839 22:01:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:29.839 22:01:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.839 22:01:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:29.839 22:01:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:29.839 22:01:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.839 22:01:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:29.839 22:01:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:29.839 22:01:11 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:29.839 22:01:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:29.839 22:01:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:29.839 22:01:11 -- common/autotest_common.sh@10 -- # set +x 00:09:29.839 22:01:11 -- nvmf/common.sh@470 -- # nvmfpid=3873304 00:09:29.839 22:01:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.839 22:01:11 -- nvmf/common.sh@471 -- # waitforlisten 3873304 00:09:29.839 22:01:11 -- common/autotest_common.sh@817 -- # 
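Condensed, the nvmf_tcp_init sequence above builds a two-port test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and port 4420 is opened before the cross-namespace pings verify connectivity. A dry-run sketch of that sequence (echoing instead of executing, since the real commands need root and the cvl_0_* devices):

```shell
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh. `run` echoes the
# commands instead of executing them; on the test machine they run for real.
run() { echo "$@"; }

nvmf_tcp_init_sketch() {
  local target=cvl_0_0 initiator=cvl_0_1 ns=cvl_0_0_ns_spdk
  run ip netns add "$ns"
  run ip link set "$target" netns "$ns"          # target port lives in the namespace
  run ip addr add 10.0.0.1/24 dev "$initiator"   # initiator side, default namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
  run ip link set "$initiator" up
  run ip netns exec "$ns" ip link set "$target" up
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

With this split, the nvmf target is later launched under `ip netns exec cvl_0_0_ns_spdk`, so target and initiator traffic really crosses the TCP stack between namespaces.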
'[' -z 3873304 ']' 00:09:29.839 22:01:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.839 22:01:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:29.839 22:01:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.839 22:01:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:29.839 22:01:11 -- common/autotest_common.sh@10 -- # set +x 00:09:29.839 [2024-04-24 22:01:11.978320] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:09:29.839 [2024-04-24 22:01:11.978420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.839 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.839 [2024-04-24 22:01:12.059987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.098 [2024-04-24 22:01:12.184927] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.098 [2024-04-24 22:01:12.184994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.098 [2024-04-24 22:01:12.185010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.098 [2024-04-24 22:01:12.185023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.098 [2024-04-24 22:01:12.185035] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:30.098 [2024-04-24 22:01:12.185138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.098 [2024-04-24 22:01:12.185212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.098 [2024-04-24 22:01:12.185270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.098 [2024-04-24 22:01:12.185273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.098 22:01:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:30.098 22:01:12 -- common/autotest_common.sh@850 -- # return 0 00:09:30.098 22:01:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:30.098 22:01:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:30.098 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.098 22:01:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.098 22:01:12 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:30.098 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.098 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.356 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.356 22:01:12 -- target/rpc.sh@26 -- # stats='{ 00:09:30.356 "tick_rate": 2700000000, 00:09:30.356 "poll_groups": [ 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_0", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_1", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": 
"nvmf_tgt_poll_group_2", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_3", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [] 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 }' 00:09:30.356 22:01:12 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:30.356 22:01:12 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:30.356 22:01:12 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:30.356 22:01:12 -- target/rpc.sh@15 -- # wc -l 00:09:30.356 22:01:12 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:30.356 22:01:12 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:30.356 22:01:12 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:30.356 22:01:12 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.356 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.356 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.356 [2024-04-24 22:01:12.453697] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.356 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.356 22:01:12 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:30.356 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.356 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.356 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.356 22:01:12 -- target/rpc.sh@33 -- # stats='{ 00:09:30.356 "tick_rate": 2700000000, 00:09:30.356 "poll_groups": [ 00:09:30.356 { 00:09:30.356 "name": 
"nvmf_tgt_poll_group_0", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [ 00:09:30.356 { 00:09:30.356 "trtype": "TCP" 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_1", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [ 00:09:30.356 { 00:09:30.356 "trtype": "TCP" 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_2", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [ 00:09:30.356 { 00:09:30.356 "trtype": "TCP" 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 }, 00:09:30.356 { 00:09:30.356 "name": "nvmf_tgt_poll_group_3", 00:09:30.356 "admin_qpairs": 0, 00:09:30.356 "io_qpairs": 0, 00:09:30.356 "current_admin_qpairs": 0, 00:09:30.356 "current_io_qpairs": 0, 00:09:30.356 "pending_bdev_io": 0, 00:09:30.356 "completed_nvme_io": 0, 00:09:30.356 "transports": [ 00:09:30.356 { 00:09:30.356 "trtype": "TCP" 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 } 00:09:30.356 ] 00:09:30.356 }' 00:09:30.356 22:01:12 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.356 22:01:12 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:30.356 22:01:12 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.356 22:01:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.356 22:01:12 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:30.356 22:01:12 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:30.356 22:01:12 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:30.356 22:01:12 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:30.356 22:01:12 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:30.356 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.356 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 Malloc1 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.615 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.615 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.615 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.615 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:30.615 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.615 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
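The jcount and jsum checks above (defined near the top of target/rpc.sh) count and sum fields pulled out of the nvmf_get_stats JSON with jq, wc and awk. A jq-free approximation on a trimmed-down stats object, shown only to make the arithmetic concrete (these are not the real helpers):

```shell
# target/rpc.sh pipes `jq` output into `wc -l` (jcount) or an awk
# summing one-liner (jsum). This grep/awk approximation does the same
# on a reduced version of the stats object printed above.
stats='{"poll_groups":[{"name":"g0","io_qpairs":2},{"name":"g1","io_qpairs":3}]}'

jcount_approx() { printf '%s\n' "$stats" | grep -o '"name"' | wc -l; }

jsum_approx() {
  printf '%s\n' "$stats" |
    grep -o '"io_qpairs": *[0-9]*' |
    awk -F': *' '{s+=$2} END {print s}'
}

jcount_approx   # number of poll groups
jsum_approx     # total io_qpairs across all groups
```

In the log, jcount confirms four poll groups (one per core in the 0xF mask) and jsum confirms zero admin and I/O qpairs before any host connects.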
00:09:30.615 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.615 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 [2024-04-24 22:01:12.660470] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:30.615 [2024-04-24 22:01:12.660814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:09:30.615 22:01:12 -- common/autotest_common.sh@638 -- # local es=0 00:09:30.615 22:01:12 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:09:30.615 22:01:12 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:30.615 22:01:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:30.615 22:01:12 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:30.615 22:01:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:30.615 22:01:12 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:30.615 22:01:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:30.615 22:01:12 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:30.615 22:01:12 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:30.615 22:01:12 -- 
common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:09:30.615 [2024-04-24 22:01:12.683319] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:09:30.615 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:30.615 could not add new controller: failed to write to nvme-fabrics device 00:09:30.615 22:01:12 -- common/autotest_common.sh@641 -- # es=1 00:09:30.615 22:01:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:30.615 22:01:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:30.615 22:01:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:30.615 22:01:12 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:30.615 22:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.615 22:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 22:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.615 22:01:12 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.181 22:01:13 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.181 22:01:13 -- common/autotest_common.sh@1184 -- # local i=0 00:09:31.181 22:01:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.181 22:01:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:31.181 22:01:13 -- common/autotest_common.sh@1191 -- # sleep 2 
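The `NOT nvme connect ...` invocation above comes from a wrapper in common/autotest_common.sh that inverts an exit status, so an expected failure (here, a host the subsystem does not yet allow) counts as a pass. Reduced to its core idea (the real helper also validates the argument with valid_exec_arg and treats signal deaths, es > 128, specially):

```shell
# Simplified sketch of the NOT helper from common/autotest_common.sh:
# succeed only when the wrapped command fails, so expected errors
# (like the rejected `nvme connect` above) pass the test.
NOT() {
  local es=0
  "$@" || es=$?
  # the real helper also checks for "died by signal" (es > 128); elided here
  (( es != 0 ))
}

NOT false && echo "expected failure detected"
NOT true || echo "unexpected success detected"
```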
00:09:33.104 22:01:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:33.104 22:01:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:33.104 22:01:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.104 22:01:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:33.104 22:01:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.104 22:01:15 -- common/autotest_common.sh@1194 -- # return 0 00:09:33.104 22:01:15 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.362 22:01:15 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.362 22:01:15 -- common/autotest_common.sh@1205 -- # local i=0 00:09:33.362 22:01:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:33.362 22:01:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.362 22:01:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:33.362 22:01:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.362 22:01:15 -- common/autotest_common.sh@1217 -- # return 0 00:09:33.362 22:01:15 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:33.362 22:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.362 22:01:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.362 22:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.363 22:01:15 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.363 22:01:15 -- common/autotest_common.sh@638 -- # local es=0 00:09:33.363 22:01:15 -- 
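The waitforserial calls after each `nvme connect` above simply poll lsblk until a block device exposing the subsystem serial (SPDKISFASTANDAWESOME) appears. A condensed version of that wait (the real helper in common/autotest_common.sh uses `grep -c` and compares a device count, sleeping 2 s between up-to-16 attempts; here `lsblk` can be shadowed by a shell function for testing):

```shell
# Condensed waitforserial from common/autotest_common.sh: poll
# `lsblk -l -o NAME,SERIAL` until a device with the given serial shows up.
waitforserial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if lsblk -l -o NAME,SERIAL 2>/dev/null | grep -qw "$serial"; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```

waitforserial_disconnect is the mirror image: it polls until the serial disappears from the lsblk output after `nvme disconnect`.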
common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.363 22:01:15 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:33.363 22:01:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:33.363 22:01:15 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:33.363 22:01:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:33.363 22:01:15 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:33.363 22:01:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:33.363 22:01:15 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:33.363 22:01:15 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:33.363 22:01:15 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.363 [2024-04-24 22:01:15.422671] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:09:33.363 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:33.363 could not add new controller: failed to write to nvme-fabrics device 00:09:33.363 22:01:15 -- common/autotest_common.sh@641 -- # es=1 00:09:33.363 22:01:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:33.363 22:01:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:33.363 22:01:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:33.363 22:01:15 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:33.363 22:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:09:33.363 22:01:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 22:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.363 22:01:15 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.929 22:01:15 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.929 22:01:15 -- common/autotest_common.sh@1184 -- # local i=0 00:09:33.929 22:01:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.929 22:01:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:33.929 22:01:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:35.831 22:01:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:35.831 22:01:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:35.831 22:01:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.831 22:01:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:35.831 22:01:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.831 22:01:18 -- common/autotest_common.sh@1194 -- # return 0 00:09:35.831 22:01:18 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.090 22:01:18 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.090 22:01:18 -- common/autotest_common.sh@1205 -- # local i=0 00:09:36.090 22:01:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:36.090 22:01:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.090 22:01:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:36.090 22:01:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.090 22:01:18 -- 
common/autotest_common.sh@1217 -- # return 0 00:09:36.090 22:01:18 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.090 22:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.090 22:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 22:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.090 22:01:18 -- target/rpc.sh@81 -- # seq 1 5 00:09:36.090 22:01:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:36.090 22:01:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.090 22:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.090 22:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 22:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.090 22:01:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.090 22:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.090 22:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 [2024-04-24 22:01:18.138251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.090 22:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.090 22:01:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:36.090 22:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.090 22:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 22:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.090 22:01:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.090 22:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.090 22:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:36.090 22:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.090 22:01:18 
-- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.655 22:01:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.655 22:01:18 -- common/autotest_common.sh@1184 -- # local i=0 00:09:36.656 22:01:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.656 22:01:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:36.656 22:01:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:38.555 22:01:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:38.555 22:01:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:38.555 22:01:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.555 22:01:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:38.555 22:01:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.555 22:01:20 -- common/autotest_common.sh@1194 -- # return 0 00:09:38.555 22:01:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.813 22:01:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:38.813 22:01:20 -- common/autotest_common.sh@1205 -- # local i=0 00:09:38.813 22:01:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:38.814 22:01:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.814 22:01:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:38.814 22:01:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.814 22:01:20 -- common/autotest_common.sh@1217 -- # return 0 00:09:38.814 22:01:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.814 
22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.814 22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:38.814 22:01:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:38.814 22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.814 22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.814 [2024-04-24 22:01:20.921538] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:38.814 22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:38.814 22:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.814 22:01:20 -- common/autotest_common.sh@10 -- # 
set +x 00:09:38.814 22:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.814 22:01:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.380 22:01:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.380 22:01:21 -- common/autotest_common.sh@1184 -- # local i=0 00:09:39.380 22:01:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.380 22:01:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:39.380 22:01:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:41.946 22:01:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:41.946 22:01:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:41.946 22:01:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.946 22:01:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:41.946 22:01:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.946 22:01:23 -- common/autotest_common.sh@1194 -- # return 0 00:09:41.946 22:01:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.946 22:01:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.946 22:01:23 -- common/autotest_common.sh@1205 -- # local i=0 00:09:41.946 22:01:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:41.946 22:01:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.946 22:01:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:41.946 22:01:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.946 22:01:23 -- common/autotest_common.sh@1217 -- # return 0 00:09:41.946 22:01:23 -- 
target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:41.946 22:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.946 22:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:41.946 22:01:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.946 22:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.946 22:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 [2024-04-24 22:01:23.740937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:41.946 22:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.946 22:01:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.946 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 22:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.946 22:01:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.204 22:01:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.204 22:01:24 -- common/autotest_common.sh@1184 -- # local i=0 00:09:42.204 22:01:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.204 22:01:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:42.204 22:01:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:44.102 22:01:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:44.103 22:01:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:44.103 22:01:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.103 22:01:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:44.103 22:01:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.103 22:01:26 -- common/autotest_common.sh@1194 -- # return 0 00:09:44.103 22:01:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.361 22:01:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.361 22:01:26 -- common/autotest_common.sh@1205 -- # local i=0 00:09:44.361 22:01:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:44.361 22:01:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.361 22:01:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:44.361 22:01:26 -- common/autotest_common.sh@1213 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:44.361 22:01:26 -- common/autotest_common.sh@1217 -- # return 0 00:09:44.361 22:01:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:44.361 22:01:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 [2024-04-24 22:01:26.469051] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- 
target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.361 22:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.361 22:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:44.361 22:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.361 22:01:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.927 22:01:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.927 22:01:27 -- common/autotest_common.sh@1184 -- # local i=0 00:09:44.927 22:01:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.927 22:01:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:44.927 22:01:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:46.865 22:01:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:46.865 22:01:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:46.865 22:01:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.865 22:01:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:46.865 22:01:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.865 22:01:29 -- common/autotest_common.sh@1194 -- # return 0 00:09:46.865 22:01:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:47.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.123 22:01:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:47.123 22:01:29 -- common/autotest_common.sh@1205 -- # local i=0 00:09:47.123 22:01:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:47.123 22:01:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.123 22:01:29 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:47.123 22:01:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:47.123 22:01:29 -- common/autotest_common.sh@1217 -- # return 0 00:09:47.123 22:01:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:47.123 22:01:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.123 [2024-04-24 22:01:29.249027] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:47.123 22:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:47.123 22:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:47.123 22:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:47.123 22:01:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.689 22:01:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.689 22:01:29 -- common/autotest_common.sh@1184 -- # local i=0 00:09:47.689 22:01:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.689 22:01:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:47.689 22:01:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:49.586 22:01:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:49.586 22:01:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:49.586 22:01:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.844 22:01:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:49.844 22:01:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.844 22:01:31 -- common/autotest_common.sh@1194 -- # return 0 00:09:49.844 22:01:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.844 22:01:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.844 22:01:31 -- common/autotest_common.sh@1205 -- # local i=0 00:09:49.844 22:01:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 
00:09:49.844 22:01:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.844 22:01:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:49.844 22:01:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.844 22:01:31 -- common/autotest_common.sh@1217 -- # return 0 00:09:49.844 22:01:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.844 22:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.844 22:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:31 -- target/rpc.sh@99 -- # seq 1 5 00:09:49.844 22:01:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.844 22:01:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.844 22:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 [2024-04-24 22:01:32.005191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.844 22:01:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 [2024-04-24 22:01:32.053250] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.844 22:01:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:49.844 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.844 22:01:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:09:49.844 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.844 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 [2024-04-24 22:01:32.101426] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:50.102 22:01:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # 
set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 [2024-04-24 22:01:32.149619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:50.102 22:01:32 -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 [2024-04-24 22:01:32.197767] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 
00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:50.102 22:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.102 22:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.102 22:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.102 22:01:32 -- target/rpc.sh@110 -- # stats='{ 00:09:50.102 "tick_rate": 2700000000, 00:09:50.102 "poll_groups": [ 00:09:50.102 { 00:09:50.103 "name": "nvmf_tgt_poll_group_0", 00:09:50.103 "admin_qpairs": 2, 00:09:50.103 "io_qpairs": 84, 00:09:50.103 "current_admin_qpairs": 0, 00:09:50.103 "current_io_qpairs": 0, 00:09:50.103 "pending_bdev_io": 0, 00:09:50.103 "completed_nvme_io": 184, 00:09:50.103 "transports": [ 00:09:50.103 { 00:09:50.103 "trtype": "TCP" 00:09:50.103 } 00:09:50.103 ] 00:09:50.103 }, 00:09:50.103 { 00:09:50.103 "name": "nvmf_tgt_poll_group_1", 00:09:50.103 "admin_qpairs": 2, 00:09:50.103 "io_qpairs": 84, 00:09:50.103 "current_admin_qpairs": 0, 00:09:50.103 "current_io_qpairs": 0, 00:09:50.103 "pending_bdev_io": 0, 00:09:50.103 "completed_nvme_io": 134, 00:09:50.103 "transports": [ 00:09:50.103 { 00:09:50.103 "trtype": "TCP" 00:09:50.103 } 00:09:50.103 ] 00:09:50.103 }, 00:09:50.103 { 00:09:50.103 "name": "nvmf_tgt_poll_group_2", 00:09:50.103 "admin_qpairs": 1, 00:09:50.103 "io_qpairs": 84, 00:09:50.103 "current_admin_qpairs": 0, 00:09:50.103 "current_io_qpairs": 0, 00:09:50.103 "pending_bdev_io": 0, 00:09:50.103 "completed_nvme_io": 185, 00:09:50.103 "transports": [ 00:09:50.103 { 00:09:50.103 "trtype": "TCP" 00:09:50.103 } 00:09:50.103 ] 00:09:50.103 }, 00:09:50.103 { 00:09:50.103 "name": "nvmf_tgt_poll_group_3", 00:09:50.103 "admin_qpairs": 2, 00:09:50.103 "io_qpairs": 84, 00:09:50.103 "current_admin_qpairs": 0, 00:09:50.103 "current_io_qpairs": 0, 00:09:50.103 "pending_bdev_io": 0, 00:09:50.103 "completed_nvme_io": 183, 00:09:50.103 "transports": [ 00:09:50.103 { 00:09:50.103 
"trtype": "TCP" 00:09:50.103 } 00:09:50.103 ] 00:09:50.103 } 00:09:50.103 ] 00:09:50.103 }' 00:09:50.103 22:01:32 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:50.103 22:01:32 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:50.103 22:01:32 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:50.103 22:01:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:50.361 22:01:32 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:09:50.361 22:01:32 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:50.361 22:01:32 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:50.361 22:01:32 -- target/rpc.sh@123 -- # nvmftestfini 00:09:50.361 22:01:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:50.361 22:01:32 -- nvmf/common.sh@117 -- # sync 00:09:50.361 22:01:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.361 22:01:32 -- nvmf/common.sh@120 -- # set +e 00:09:50.361 22:01:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.361 22:01:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.361 rmmod nvme_tcp 00:09:50.361 rmmod nvme_fabrics 00:09:50.361 rmmod nvme_keyring 00:09:50.361 22:01:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.361 22:01:32 -- nvmf/common.sh@124 -- # set -e 00:09:50.361 22:01:32 -- nvmf/common.sh@125 -- # return 0 00:09:50.361 22:01:32 -- nvmf/common.sh@478 -- # '[' -n 3873304 ']' 00:09:50.361 22:01:32 -- nvmf/common.sh@479 -- # killprocess 3873304 00:09:50.361 22:01:32 -- common/autotest_common.sh@936 -- # '[' -z 3873304 ']' 00:09:50.361 22:01:32 -- common/autotest_common.sh@940 -- # 
kill -0 3873304 00:09:50.361 22:01:32 -- common/autotest_common.sh@941 -- # uname 00:09:50.361 22:01:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:50.361 22:01:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3873304 00:09:50.361 22:01:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:50.361 22:01:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:50.361 22:01:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3873304' 00:09:50.361 killing process with pid 3873304 00:09:50.361 22:01:32 -- common/autotest_common.sh@955 -- # kill 3873304 00:09:50.361 [2024-04-24 22:01:32.469262] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:50.361 22:01:32 -- common/autotest_common.sh@960 -- # wait 3873304 00:09:50.620 22:01:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:50.620 22:01:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:50.620 22:01:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:50.620 22:01:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.620 22:01:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.620 22:01:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.620 22:01:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.620 22:01:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.146 22:01:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.146 00:09:53.146 real 0m25.368s 00:09:53.146 user 1m21.790s 00:09:53.146 sys 0m3.898s 00:09:53.146 22:01:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.146 22:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:53.146 ************************************ 00:09:53.146 END TEST nvmf_rpc 00:09:53.146 
************************************ 00:09:53.146 22:01:34 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:53.146 22:01:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.146 22:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.146 22:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:53.146 ************************************ 00:09:53.146 START TEST nvmf_invalid 00:09:53.146 ************************************ 00:09:53.146 22:01:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:53.146 * Looking for test storage... 00:09:53.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.146 22:01:35 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.146 22:01:35 -- nvmf/common.sh@7 -- # uname -s 00:09:53.146 22:01:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.146 22:01:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.146 22:01:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.146 22:01:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.146 22:01:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.146 22:01:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.146 22:01:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.146 22:01:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.146 22:01:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.146 22:01:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.146 22:01:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:53.146 22:01:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:53.146 22:01:35 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.146 22:01:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.146 22:01:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.146 22:01:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.146 22:01:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.146 22:01:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.146 22:01:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.146 22:01:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.147 22:01:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.147 22:01:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.147 22:01:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.147 22:01:35 -- paths/export.sh@5 -- # export PATH 00:09:53.147 22:01:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.147 22:01:35 -- nvmf/common.sh@47 -- # : 0 00:09:53.147 22:01:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.147 22:01:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.147 22:01:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.147 22:01:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.147 22:01:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.147 22:01:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.147 22:01:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.147 22:01:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.147 22:01:35 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:53.147 22:01:35 -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.147 22:01:35 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:53.147 22:01:35 -- target/invalid.sh@14 -- # target=foobar 00:09:53.147 22:01:35 -- target/invalid.sh@16 -- # RANDOM=0 00:09:53.147 22:01:35 -- target/invalid.sh@34 -- # nvmftestinit 00:09:53.147 22:01:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:53.147 22:01:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.147 22:01:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:53.147 22:01:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:53.147 22:01:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:53.147 22:01:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.147 22:01:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.147 22:01:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.147 22:01:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:53.147 22:01:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:53.147 22:01:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.147 22:01:35 -- common/autotest_common.sh@10 -- # set +x 00:09:55.677 22:01:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:55.677 22:01:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.677 22:01:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.677 22:01:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.677 22:01:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.677 22:01:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.677 22:01:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.677 22:01:37 -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.677 22:01:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.677 22:01:37 -- nvmf/common.sh@296 -- # e810=() 00:09:55.677 22:01:37 -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.677 
22:01:37 -- nvmf/common.sh@297 -- # x722=() 00:09:55.677 22:01:37 -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.677 22:01:37 -- nvmf/common.sh@298 -- # mlx=() 00:09:55.677 22:01:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.677 22:01:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.677 22:01:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.677 22:01:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.677 22:01:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.677 22:01:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:55.677 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:55.677 22:01:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.677 22:01:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:55.677 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:55.677 22:01:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.677 22:01:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.677 22:01:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.677 22:01:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:55.677 Found net devices under 0000:84:00.0: cvl_0_0 00:09:55.677 22:01:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.677 22:01:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.677 22:01:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.677 22:01:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.677 22:01:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.1: cvl_0_1' 00:09:55.677 Found net devices under 0000:84:00.1: cvl_0_1 00:09:55.677 22:01:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.677 22:01:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:55.677 22:01:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:55.677 22:01:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:55.677 22:01:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.677 22:01:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.677 22:01:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.677 22:01:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.677 22:01:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.677 22:01:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.677 22:01:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.677 22:01:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.677 22:01:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.677 22:01:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.677 22:01:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.677 22:01:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.678 22:01:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.678 22:01:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.678 22:01:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.678 22:01:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.678 22:01:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.678 22:01:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:09:55.678 22:01:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.678 22:01:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:09:55.678 00:09:55.678 --- 10.0.0.2 ping statistics --- 00:09:55.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.678 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:55.678 22:01:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:09:55.678 00:09:55.678 --- 10.0.0.1 ping statistics --- 00:09:55.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.678 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:55.678 22:01:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.678 22:01:37 -- nvmf/common.sh@411 -- # return 0 00:09:55.678 22:01:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:55.678 22:01:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.678 22:01:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:55.678 22:01:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:55.678 22:01:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.678 22:01:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:55.678 22:01:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:55.678 22:01:37 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:55.678 22:01:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:55.678 22:01:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:55.678 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.678 22:01:37 -- nvmf/common.sh@470 -- # nvmfpid=3877888 00:09:55.678 22:01:37 -- nvmf/common.sh@469 -- # ip netns 
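The namespace plumbing above works by building `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` and splicing that array in front of `NVMF_APP`, so every target-side command transparently runs inside `cvl_0_0_ns_spdk`. A bash sketch of that array-prefix idiom, using a harmless `env` stand-in so it runs without root or a real namespace:

```shell
# Sketch of the NVMF_TARGET_NS_CMD prefixing pattern from nvmf/common.sh.
# In the real harness NS_CMD would be (ip netns exec cvl_0_0_ns_spdk);
# "env" is a hypothetical stand-in so this runs unprivileged.
NS_CMD=(env)
run_in_ns() { "${NS_CMD[@]}" "$@"; }
run_in_ns echo "hello from namespace wrapper"   # → hello from namespace wrapper
```

Expanding the prefix as an array (rather than a string) keeps each word intact even if the namespace name ever contained spaces, which is why the harness stores the command this way.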
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.678 22:01:37 -- nvmf/common.sh@471 -- # waitforlisten 3877888 00:09:55.678 22:01:37 -- common/autotest_common.sh@817 -- # '[' -z 3877888 ']' 00:09:55.678 22:01:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.678 22:01:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:55.678 22:01:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.678 22:01:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:55.678 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.678 [2024-04-24 22:01:37.594722] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:09:55.678 [2024-04-24 22:01:37.594817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.678 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.678 [2024-04-24 22:01:37.670761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.678 [2024-04-24 22:01:37.794701] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.678 [2024-04-24 22:01:37.794769] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.678 [2024-04-24 22:01:37.794785] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.678 [2024-04-24 22:01:37.794798] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:55.678 [2024-04-24 22:01:37.794810] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.678 [2024-04-24 22:01:37.794906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.678 [2024-04-24 22:01:37.794982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.678 [2024-04-24 22:01:37.795035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.678 [2024-04-24 22:01:37.795038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.678 22:01:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.678 22:01:37 -- common/autotest_common.sh@850 -- # return 0 00:09:55.678 22:01:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:55.678 22:01:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:55.678 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:55.936 22:01:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.936 22:01:37 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:55.936 22:01:37 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32739 00:09:56.193 [2024-04-24 22:01:38.222105] nvmf_rpc.c: 402:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:56.193 22:01:38 -- target/invalid.sh@40 -- # out='request: 00:09:56.193 { 00:09:56.193 "nqn": "nqn.2016-06.io.spdk:cnode32739", 00:09:56.193 "tgt_name": "foobar", 00:09:56.193 "method": "nvmf_create_subsystem", 00:09:56.193 "req_id": 1 00:09:56.193 } 00:09:56.193 Got JSON-RPC error response 00:09:56.193 response: 00:09:56.193 { 00:09:56.193 "code": -32603, 00:09:56.193 "message": "Unable to find target foobar" 00:09:56.193 }' 00:09:56.193 22:01:38 -- target/invalid.sh@41 -- # [[ request: 
00:09:56.193 { 00:09:56.193 "nqn": "nqn.2016-06.io.spdk:cnode32739", 00:09:56.193 "tgt_name": "foobar", 00:09:56.193 "method": "nvmf_create_subsystem", 00:09:56.193 "req_id": 1 00:09:56.193 } 00:09:56.193 Got JSON-RPC error response 00:09:56.193 response: 00:09:56.193 { 00:09:56.193 "code": -32603, 00:09:56.193 "message": "Unable to find target foobar" 00:09:56.193 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:56.193 22:01:38 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:56.193 22:01:38 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26488 00:09:56.452 [2024-04-24 22:01:38.691751] nvmf_rpc.c: 419:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26488: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:56.709 22:01:38 -- target/invalid.sh@45 -- # out='request: 00:09:56.709 { 00:09:56.709 "nqn": "nqn.2016-06.io.spdk:cnode26488", 00:09:56.709 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:56.709 "method": "nvmf_create_subsystem", 00:09:56.709 "req_id": 1 00:09:56.709 } 00:09:56.709 Got JSON-RPC error response 00:09:56.709 response: 00:09:56.709 { 00:09:56.709 "code": -32602, 00:09:56.709 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:56.709 }' 00:09:56.709 22:01:38 -- target/invalid.sh@46 -- # [[ request: 00:09:56.709 { 00:09:56.709 "nqn": "nqn.2016-06.io.spdk:cnode26488", 00:09:56.709 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:56.709 "method": "nvmf_create_subsystem", 00:09:56.709 "req_id": 1 00:09:56.709 } 00:09:56.710 Got JSON-RPC error response 00:09:56.710 response: 00:09:56.710 { 00:09:56.710 "code": -32602, 00:09:56.710 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:56.710 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:56.710 22:01:38 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:56.710 22:01:38 -- target/invalid.sh@50 -- # 
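Each negative test above captures the JSON-RPC error body into `out` and validates it with a bash `[[ ... == *pattern* ]]` glob match (the backslash-escaped `*\I\n\v\a\l\i\d\ \S\N*` form is just xtrace's rendering of that pattern). A minimal sketch of the check, with the error text abbreviated from the log:

```shell
# Sketch of the pattern-match assertion used by target/invalid.sh: the
# captured JSON-RPC error body must contain the expected message substring.
out='{"code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME"}'
if [[ $out == *"Invalid SN"* ]]; then
  echo "matched expected error"   # → matched expected error
fi
```

Matching on the message substring rather than the full body keeps the test stable across cosmetic changes in the JSON-RPC response formatting.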
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8953 00:09:57.276 [2024-04-24 22:01:39.273734] nvmf_rpc.c: 428:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8953: invalid model number 'SPDK_Controller' 00:09:57.276 22:01:39 -- target/invalid.sh@50 -- # out='request: 00:09:57.276 { 00:09:57.276 "nqn": "nqn.2016-06.io.spdk:cnode8953", 00:09:57.276 "model_number": "SPDK_Controller\u001f", 00:09:57.276 "method": "nvmf_create_subsystem", 00:09:57.276 "req_id": 1 00:09:57.276 } 00:09:57.276 Got JSON-RPC error response 00:09:57.276 response: 00:09:57.276 { 00:09:57.276 "code": -32602, 00:09:57.276 "message": "Invalid MN SPDK_Controller\u001f" 00:09:57.276 }' 00:09:57.276 22:01:39 -- target/invalid.sh@51 -- # [[ request: 00:09:57.276 { 00:09:57.277 "nqn": "nqn.2016-06.io.spdk:cnode8953", 00:09:57.277 "model_number": "SPDK_Controller\u001f", 00:09:57.277 "method": "nvmf_create_subsystem", 00:09:57.277 "req_id": 1 00:09:57.277 } 00:09:57.277 Got JSON-RPC error response 00:09:57.277 response: 00:09:57.277 { 00:09:57.277 "code": -32602, 00:09:57.277 "message": "Invalid MN SPDK_Controller\u001f" 00:09:57.277 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:57.277 22:01:39 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:57.277 22:01:39 -- target/invalid.sh@19 -- # local length=21 ll 00:09:57.277 22:01:39 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:57.277 22:01:39 -- 
target/invalid.sh@21 -- # local chars 00:09:57.277 22:01:39 -- target/invalid.sh@22 -- # local string 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 92 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+='\' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 115 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=s 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 51 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=3 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 42 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+='*' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 89 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=Y 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 92 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:57.277 
22:01:39 -- target/invalid.sh@25 -- # string+='\' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 33 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+='!' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 74 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=J 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 61 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+== 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 40 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+='(' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 57 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=9 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 77 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=M 00:09:57.277 
22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 48 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=0 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 61 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+== 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 67 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=C 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 46 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=. 
00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 78 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=N 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 111 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=o 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 80 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=P 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 124 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+='|' 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # printf %x 74 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:57.277 22:01:39 -- target/invalid.sh@25 -- # string+=J 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.277 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.277 22:01:39 -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:09:57.277 22:01:39 -- target/invalid.sh@31 -- # echo '\s3*Y\!J=(9M0=C.NoP|J' 00:09:57.277 22:01:39 -- target/invalid.sh@54 -- # 
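The `gen_random_s` trace above builds its string one character at a time: pick an ASCII code from the `chars` array, render it with `printf %x` / `echo -e`, and append to `string`. A compact bash sketch of the same loop (21 characters from the printable range; the per-character quoting of shell-special characters seen in the trace is omitted for brevity):

```shell
# Sketch of the gen_random_s loop from target/invalid.sh: choose a code
# point at random, render it as a character, append to the string.
chars=($(seq 32 126))            # printable ASCII, as in the chars=(...) array
string=""
for ((ll = 0; ll < 21; ll++)); do
  code=${chars[RANDOM % ${#chars[@]}]}
  string+=$(printf "\\x$(printf '%x' "$code")")
done
echo "${#string}"   # → 21
```

The generated string is then fed to `nvmf_create_subsystem` as a serial number, exercising the RPC layer's input validation with arbitrary printable garbage.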
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\s3*Y\!J=(9M0=C.NoP|J' nqn.2016-06.io.spdk:cnode818 00:09:57.535 [2024-04-24 22:01:39.699121] nvmf_rpc.c: 419:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode818: invalid serial number '\s3*Y\!J=(9M0=C.NoP|J' 00:09:57.535 22:01:39 -- target/invalid.sh@54 -- # out='request: 00:09:57.535 { 00:09:57.535 "nqn": "nqn.2016-06.io.spdk:cnode818", 00:09:57.535 "serial_number": "\\s3*Y\\!J=(9M0=C.NoP|J", 00:09:57.535 "method": "nvmf_create_subsystem", 00:09:57.535 "req_id": 1 00:09:57.535 } 00:09:57.535 Got JSON-RPC error response 00:09:57.535 response: 00:09:57.535 { 00:09:57.535 "code": -32602, 00:09:57.535 "message": "Invalid SN \\s3*Y\\!J=(9M0=C.NoP|J" 00:09:57.535 }' 00:09:57.535 22:01:39 -- target/invalid.sh@55 -- # [[ request: 00:09:57.535 { 00:09:57.535 "nqn": "nqn.2016-06.io.spdk:cnode818", 00:09:57.535 "serial_number": "\\s3*Y\\!J=(9M0=C.NoP|J", 00:09:57.535 "method": "nvmf_create_subsystem", 00:09:57.535 "req_id": 1 00:09:57.535 } 00:09:57.535 Got JSON-RPC error response 00:09:57.535 response: 00:09:57.535 { 00:09:57.535 "code": -32602, 00:09:57.535 "message": "Invalid SN \\s3*Y\\!J=(9M0=C.NoP|J" 00:09:57.535 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:57.535 22:01:39 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:57.535 22:01:39 -- target/invalid.sh@19 -- # local length=41 ll 00:09:57.536 22:01:39 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:57.536 22:01:39 
-- target/invalid.sh@21 -- # local chars 00:09:57.536 22:01:39 -- target/invalid.sh@22 -- # local string 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 65 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=A 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 108 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=l 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 71 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=G 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 124 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+='|' 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 102 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=f 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 62 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:57.536 
22:01:39 -- target/invalid.sh@25 -- # string+='>' 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 50 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=2 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 37 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=% 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 43 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=+ 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 49 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=1 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 51 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=3 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 48 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=0 00:09:57.536 
22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 90 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=Z 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 100 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=d 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 32 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=' ' 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 105 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+=i 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 34 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # string+='"' 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.536 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.536 22:01:39 -- target/invalid.sh@25 -- # printf %x 60 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='<' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 118 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=v 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 124 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='|' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 42 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='*' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 59 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=';' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 86 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=V 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 33 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='!' 
00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 123 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='{' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 91 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='[' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 104 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=h 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 61 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+== 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 51 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=3 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 32 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=' ' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 108 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=l 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 120 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=x 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 105 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=i 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 80 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+=P 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # printf %x 123 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:57.795 22:01:39 -- target/invalid.sh@25 -- # string+='{' 00:09:57.795 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 86 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=V 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 59 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=';' 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 119 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=w 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 74 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=J 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 49 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=1 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # printf %x 43 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:57.796 22:01:39 -- target/invalid.sh@25 -- # string+=+ 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.796 22:01:39 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.796 22:01:39 -- target/invalid.sh@28 -- # [[ A == \- ]] 00:09:57.796 22:01:39 -- target/invalid.sh@31 -- # echo 'AlG|f>2%+130Zd i"<v|*;V!{[h=3 lxiP{V;wJ1+' 00:10:01.587 22:01:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.491 22:01:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.491 00:10:03.491 real
0m10.745s 00:10:03.491 user 0m28.572s 00:10:03.491 sys 0m2.925s 00:10:03.491 22:01:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:03.491 22:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:03.491 ************************************ 00:10:03.491 END TEST nvmf_invalid 00:10:03.491 ************************************ 00:10:03.750 22:01:45 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:03.750 22:01:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:03.750 22:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.750 22:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:03.750 ************************************ 00:10:03.750 START TEST nvmf_abort 00:10:03.750 ************************************ 00:10:03.750 22:01:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:03.750 * Looking for test storage... 
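The long per-character expansion traced above (invalid.sh@19-31) is the `gen_random_s` helper unrolled 41 times: each iteration picks a codepoint from the `chars` table, formats it as hex, and appends the character via an escape. A compact sketch of the same technique (the function name matches the trace; the modulo range 0x20–0x7e is an assumption, the script's own table runs 32..127):

```shell
# Minimal sketch of gen_random_s as seen expanding in the trace: build a
# string one random printable-ASCII character at a time via printf escapes.
gen_random_s() {
    local length=$1 string='' ll c
    for (( ll = 0; ll < length; ll++ )); do
        c=$(( RANDOM % 95 + 32 ))                      # printable ASCII 0x20..0x7e
        string+=$(printf "\\$(printf '%03o' "$c")")    # append one character
    done
    printf '%s\n' "$string"
}
```

The traced run is exactly this loop with `length=41`, producing a serial number long enough (and strange enough) for the subsequent `nvmf_create_subsystem` call to reject.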
00:10:03.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.750 22:01:45 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.750 22:01:45 -- nvmf/common.sh@7 -- # uname -s 00:10:03.750 22:01:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.750 22:01:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.750 22:01:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.750 22:01:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.750 22:01:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.750 22:01:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.750 22:01:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.750 22:01:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.750 22:01:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.750 22:01:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.750 22:01:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:03.750 22:01:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:03.750 22:01:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.750 22:01:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.750 22:01:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.750 22:01:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.750 22:01:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.750 22:01:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.750 22:01:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.750 22:01:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.750 22:01:45 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.750 22:01:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.750 22:01:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.750 22:01:45 -- paths/export.sh@5 -- # export PATH 00:10:03.750 22:01:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.750 22:01:45 -- nvmf/common.sh@47 -- # : 0 00:10:03.750 22:01:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.750 22:01:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.750 22:01:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.750 22:01:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.750 22:01:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.750 22:01:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.750 22:01:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.750 22:01:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.750 22:01:45 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.750 22:01:45 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:03.750 22:01:45 -- target/abort.sh@14 -- # nvmftestinit 00:10:03.750 22:01:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:03.750 22:01:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.750 22:01:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:03.750 22:01:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:03.750 22:01:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:03.750 22:01:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.750 22:01:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.750 22:01:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.750 22:01:45 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:03.750 22:01:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:03.750 22:01:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.750 22:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.280 22:01:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:06.280 22:01:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.280 22:01:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.280 22:01:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.280 22:01:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.280 22:01:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.280 22:01:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.280 22:01:48 -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.280 22:01:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.280 22:01:48 -- nvmf/common.sh@296 -- # e810=() 00:10:06.280 22:01:48 -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.280 22:01:48 -- nvmf/common.sh@297 -- # x722=() 00:10:06.280 22:01:48 -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.280 22:01:48 -- nvmf/common.sh@298 -- # mlx=() 00:10:06.280 22:01:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.280 22:01:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.280 22:01:48 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.280 22:01:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.280 22:01:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:06.280 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:06.280 22:01:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.280 22:01:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:06.280 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:06.280 22:01:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.280 22:01:48 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.280 22:01:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.280 22:01:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.280 22:01:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:06.280 Found net devices under 0000:84:00.0: cvl_0_0 00:10:06.280 22:01:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.280 22:01:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.280 22:01:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.280 22:01:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:06.280 Found net devices under 0000:84:00.1: cvl_0_1 00:10:06.280 22:01:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:06.280 22:01:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:06.280 22:01:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.280 22:01:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.280 22:01:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.280 22:01:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.280 22:01:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.280 22:01:48 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.280 22:01:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.280 22:01:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.280 22:01:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.280 22:01:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.280 22:01:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.280 22:01:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.280 22:01:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.280 22:01:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.280 22:01:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.280 22:01:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.280 22:01:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.280 22:01:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.280 22:01:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:10:06.280 00:10:06.280 --- 10.0.0.2 ping statistics --- 00:10:06.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.280 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:06.280 22:01:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:10:06.280 00:10:06.280 --- 10.0.0.1 ping statistics --- 00:10:06.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.280 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:06.280 22:01:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.280 22:01:48 -- nvmf/common.sh@411 -- # return 0 00:10:06.280 22:01:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.280 22:01:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.280 22:01:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:06.280 22:01:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.280 22:01:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:06.280 22:01:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:06.280 22:01:48 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:06.280 22:01:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.280 22:01:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.280 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.280 22:01:48 -- nvmf/common.sh@470 -- # nvmfpid=3880702 00:10:06.280 22:01:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.280 22:01:48 -- nvmf/common.sh@471 -- # waitforlisten 3880702 00:10:06.280 22:01:48 -- common/autotest_common.sh@817 -- # '[' -z 3880702 ']' 00:10:06.280 22:01:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.280 22:01:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.280 22:01:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
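The `nvmf_tcp_init` steps traced at common.sh@229-268 reduce to a small network-namespace recipe: move the target NIC into its own namespace, address both ends, open the NVMe/TCP port, and ping in both directions. A hedged sketch (interface names and addresses are copied from the log; the `run` wrapper and `DRY_RUN` knob are additions so the sequence can be previewed without root or the real NICs):

```shell
# Echo commands instead of running them when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_test_netns() {
    local ns=cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"          # target-side NIC moves into the ns
    run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                       # initiator -> target check
    run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator check
}
```

`DRY_RUN=1 setup_test_netns` prints the command list; in the log the two ping replies (0.140 ms and 0.082 ms) confirm both directions work before `nvmf_tgt` is started inside the namespace.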
00:10:06.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.280 22:01:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.280 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.280 [2024-04-24 22:01:48.351485] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:10:06.280 [2024-04-24 22:01:48.351567] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.280 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.280 [2024-04-24 22:01:48.429299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.538 [2024-04-24 22:01:48.553323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.538 [2024-04-24 22:01:48.553384] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.538 [2024-04-24 22:01:48.553411] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.538 [2024-04-24 22:01:48.553427] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.538 [2024-04-24 22:01:48.553451] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:06.538 [2024-04-24 22:01:48.553512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.538 [2024-04-24 22:01:48.553568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.538 [2024-04-24 22:01:48.553571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.538 22:01:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:06.538 22:01:48 -- common/autotest_common.sh@850 -- # return 0 00:10:06.538 22:01:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:06.538 22:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 22:01:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.538 22:01:48 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 [2024-04-24 22:01:48.704177] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 Malloc0 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 Delay0 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 [2024-04-24 22:01:48.769503] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:06.538 [2024-04-24 22:01:48.769850] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.538 22:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.538 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:06.538 22:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.538 22:01:48 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:06.796 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.796 [2024-04-24 22:01:48.875629] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:09.323 Initializing NVMe Controllers 00:10:09.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:09.323 controller IO queue size 128 less than required 00:10:09.323 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:09.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:09.324 Initialization complete. Launching workers. 00:10:09.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27509 00:10:09.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27570, failed to submit 62 00:10:09.324 success 27513, unsuccess 57, failed 0 00:10:09.324 22:01:50 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:09.324 22:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.324 22:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:09.324 22:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.324 22:01:51 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:09.324 22:01:51 -- target/abort.sh@38 -- # nvmftestfini 00:10:09.324 22:01:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:09.324 22:01:51 -- nvmf/common.sh@117 -- # sync 00:10:09.324 22:01:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.324 22:01:51 -- nvmf/common.sh@120 -- # set +e 00:10:09.324 22:01:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.324 22:01:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.324 rmmod nvme_tcp 00:10:09.324 rmmod nvme_fabrics 00:10:09.324 rmmod nvme_keyring 00:10:09.324 22:01:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.324 22:01:51 -- nvmf/common.sh@124 -- # set -e 00:10:09.324 22:01:51 -- nvmf/common.sh@125 -- # return 0 
00:10:09.324 22:01:51 -- nvmf/common.sh@478 -- # '[' -n 3880702 ']' 00:10:09.324 22:01:51 -- nvmf/common.sh@479 -- # killprocess 3880702 00:10:09.324 22:01:51 -- common/autotest_common.sh@936 -- # '[' -z 3880702 ']' 00:10:09.324 22:01:51 -- common/autotest_common.sh@940 -- # kill -0 3880702 00:10:09.324 22:01:51 -- common/autotest_common.sh@941 -- # uname 00:10:09.324 22:01:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.324 22:01:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3880702 00:10:09.324 22:01:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:09.324 22:01:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:09.324 22:01:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3880702' 00:10:09.324 killing process with pid 3880702 00:10:09.324 22:01:51 -- common/autotest_common.sh@955 -- # kill 3880702 00:10:09.324 [2024-04-24 22:01:51.085566] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:09.324 22:01:51 -- common/autotest_common.sh@960 -- # wait 3880702 00:10:09.324 22:01:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:09.324 22:01:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:09.324 22:01:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:09.324 22:01:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.324 22:01:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.324 22:01:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.324 22:01:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.324 22:01:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.235 22:01:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.235 00:10:11.235 real 0m7.553s 00:10:11.235 user 0m10.687s 00:10:11.235 
sys 0m2.794s 00:10:11.235 22:01:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:11.235 22:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:11.235 ************************************ 00:10:11.235 END TEST nvmf_abort 00:10:11.235 ************************************ 00:10:11.235 22:01:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:11.235 22:01:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:11.235 22:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.235 22:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:11.494 ************************************ 00:10:11.494 START TEST nvmf_ns_hotplug_stress 00:10:11.494 ************************************ 00:10:11.494 22:01:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:11.494 * Looking for test storage... 
00:10:11.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.494 22:01:53 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.494 22:01:53 -- nvmf/common.sh@7 -- # uname -s 00:10:11.494 22:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.494 22:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.494 22:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.494 22:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.494 22:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.494 22:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.494 22:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.494 22:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.494 22:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.494 22:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.494 22:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.494 22:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.494 22:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.494 22:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.494 22:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.494 22:01:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.494 22:01:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.494 22:01:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.494 22:01:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.494 22:01:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.494 22:01:53 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.494 22:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.494 22:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.494 22:01:53 -- paths/export.sh@5 -- # export PATH 00:10:11.494 22:01:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.494 22:01:53 -- nvmf/common.sh@47 -- # : 0 00:10:11.494 22:01:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.494 22:01:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.494 22:01:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.494 22:01:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.494 22:01:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.494 22:01:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.494 22:01:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.494 22:01:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.494 22:01:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.494 22:01:53 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:10:11.494 22:01:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:11.494 22:01:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.494 22:01:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:11.494 22:01:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:11.494 22:01:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:11.494 22:01:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.494 22:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.494 22:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.494 22:01:53 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:11.494 22:01:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:11.494 22:01:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.494 22:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:14.025 22:01:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:14.025 22:01:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.025 22:01:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.025 22:01:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.025 22:01:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.025 22:01:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.025 22:01:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.025 22:01:55 -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.025 22:01:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.025 22:01:55 -- nvmf/common.sh@296 -- # e810=() 00:10:14.025 22:01:55 -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.025 22:01:55 -- nvmf/common.sh@297 -- # x722=() 00:10:14.025 22:01:55 -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.025 22:01:55 -- nvmf/common.sh@298 -- # mlx=() 00:10:14.025 22:01:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.025 22:01:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.025 22:01:55 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.025 22:01:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.025 22:01:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.025 22:01:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.025 22:01:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:14.025 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:14.025 22:01:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.025 22:01:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:14.025 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:14.025 22:01:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.025 22:01:55 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.025 22:01:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.025 22:01:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.025 22:01:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:14.025 Found net devices under 0000:84:00.0: cvl_0_0 00:10:14.025 22:01:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.025 22:01:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.025 22:01:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.025 22:01:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.025 22:01:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:14.025 Found net devices under 0000:84:00.1: cvl_0_1 00:10:14.025 22:01:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.025 22:01:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:14.025 22:01:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:14.025 22:01:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:14.025 22:01:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.025 22:01:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.025 22:01:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.025 22:01:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.025 22:01:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.025 22:01:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.025 22:01:55 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.025 22:01:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.025 22:01:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.025 22:01:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.025 22:01:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.025 22:01:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.025 22:01:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.025 22:01:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.025 22:01:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.025 22:01:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.025 22:01:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.025 22:01:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.025 22:01:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.025 22:01:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:10:14.025 00:10:14.025 --- 10.0.0.2 ping statistics --- 00:10:14.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.025 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:14.025 22:01:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:10:14.025 00:10:14.025 --- 10.0.0.1 ping statistics --- 00:10:14.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.025 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:14.025 22:01:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.025 22:01:56 -- nvmf/common.sh@411 -- # return 0 00:10:14.025 22:01:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:14.025 22:01:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.025 22:01:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:14.025 22:01:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:14.025 22:01:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.025 22:01:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:14.025 22:01:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:14.025 22:01:56 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:14.025 22:01:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:14.025 22:01:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:14.025 22:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:14.025 22:01:56 -- nvmf/common.sh@470 -- # nvmfpid=3883071 00:10:14.025 22:01:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:14.025 22:01:56 -- nvmf/common.sh@471 -- # waitforlisten 3883071 00:10:14.025 22:01:56 -- common/autotest_common.sh@817 -- # '[' -z 3883071 ']' 00:10:14.025 22:01:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.025 22:01:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:14.025 22:01:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.025 22:01:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:14.025 22:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:14.025 [2024-04-24 22:01:56.173708] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:10:14.026 [2024-04-24 22:01:56.173802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.026 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.026 [2024-04-24 22:01:56.258276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.285 [2024-04-24 22:01:56.394354] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.285 [2024-04-24 22:01:56.394431] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.285 [2024-04-24 22:01:56.394449] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.285 [2024-04-24 22:01:56.394481] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.285 [2024-04-24 22:01:56.394495] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:14.285 [2024-04-24 22:01:56.394611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.285 [2024-04-24 22:01:56.394668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.285 [2024-04-24 22:01:56.394672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.285 22:01:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:14.285 22:01:56 -- common/autotest_common.sh@850 -- # return 0 00:10:14.285 22:01:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:14.285 22:01:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:14.285 22:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:14.542 22:01:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.542 22:01:56 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:14.542 22:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:14.800 [2024-04-24 22:01:56.821790] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.800 22:01:56 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.059 22:01:57 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.625 [2024-04-24 22:01:57.706358] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:15.625 [2024-04-24 22:01:57.706713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.625 22:01:57 -- target/ns_hotplug_stress.sh@22 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:16.191 22:01:58 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:16.449 Malloc0 00:10:16.449 22:01:58 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:17.015 Delay0 00:10:17.015 22:01:59 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.272 22:01:59 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:17.837 NULL1 00:10:17.837 22:01:59 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:18.094 22:02:00 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3883534 00:10:18.094 22:02:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:18.094 22:02:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:18.094 22:02:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.353 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.726 Read completed with error (sct=0, sc=11) 00:10:19.726 22:02:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.726 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:10:19.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.726 22:02:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:19.726 22:02:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:19.983 true 00:10:19.983 22:02:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:19.983 22:02:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.917 22:02:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:10:21.175 22:02:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:21.175 22:02:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:21.433 true 00:10:21.691 22:02:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:21.691 22:02:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.294 22:02:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.552 22:02:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:22.552 22:02:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:23.118 true 00:10:23.118 22:02:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:23.118 22:02:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.683 22:02:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.941 22:02:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:23.941 22:02:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:24.507 true 00:10:24.507 22:02:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:24.507 22:02:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.877 22:02:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.442 22:02:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:26.442 22:02:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:26.442 true 00:10:26.442 22:02:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:26.442 22:02:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.007 22:02:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.572 22:02:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:27.572 22:02:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:27.572 true 00:10:27.830 22:02:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
3883534 00:10:27.830 22:02:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 22:02:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.462 22:02:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:29.462 22:02:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:29.720 true 00:10:29.720 22:02:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:29.720 22:02:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.653 22:02:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.911 22:02:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:30.911 22:02:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:31.169 true 00:10:31.169 22:02:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:31.169 22:02:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.736 22:02:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.251 22:02:14 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1009 00:10:32.251 22:02:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:32.817 true 00:10:32.817 22:02:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:32.817 22:02:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.076 22:02:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.642 22:02:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:33.642 22:02:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:33.900 true 00:10:33.900 22:02:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:33.900 22:02:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.274 22:02:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.791 
22:02:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:35.791 22:02:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:36.049 true 00:10:36.049 22:02:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:36.049 22:02:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.616 22:02:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.142 22:02:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:37.143 22:02:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:37.406 true 00:10:37.406 22:02:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:37.406 22:02:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.143 22:02:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.400 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.658 22:02:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:38.658 22:02:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:38.916 true 00:10:38.916 22:02:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:38.916 22:02:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.173 22:02:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.430 22:02:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:39.430 22:02:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:40.062 true 00:10:40.062 22:02:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:40.062 22:02:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.628 
22:02:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.193 22:02:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:41.193 22:02:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:41.450 true 00:10:41.450 22:02:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:41.450 22:02:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.708 22:02:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.274 22:02:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:42.274 22:02:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:42.532 true 00:10:42.532 22:02:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:42.532 22:02:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.790 22:02:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.355 22:02:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:43.355 22:02:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:10:43.613 true 00:10:43.613 22:02:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:43.613 22:02:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.985 22:02:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.500 22:02:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:45.500 22:02:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:45.758 true 00:10:45.758 22:02:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:45.758 22:02:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.131 22:02:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.389 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:10:47.389 22:02:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:47.389 22:02:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:47.955 true 00:10:47.955 22:02:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:47.955 22:02:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.522 22:02:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.522 Initializing NVMe Controllers 00:10:48.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:48.522 Controller IO queue size 128, less than required. 00:10:48.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.522 Controller IO queue size 128, less than required. 00:10:48.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:48.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:48.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:48.522 Initialization complete. Launching workers. 
00:10:48.522 ======================================================== 00:10:48.522 Latency(us) 00:10:48.522 Device Information : IOPS MiB/s Average min max 00:10:48.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3147.44 1.54 25371.05 2134.67 1016105.67 00:10:48.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11307.37 5.52 11320.16 3281.08 407280.00 00:10:48.522 ======================================================== 00:10:48.522 Total : 14454.81 7.06 14379.65 2134.67 1016105.67 00:10:48.522 00:10:49.088 22:02:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:49.088 22:02:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:49.346 true 00:10:49.346 22:02:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3883534 00:10:49.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3883534) - No such process 00:10:49.346 22:02:31 -- target/ns_hotplug_stress.sh@44 -- # wait 3883534 00:10:49.346 22:02:31 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:49.346 22:02:31 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:49.346 22:02:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:49.346 22:02:31 -- nvmf/common.sh@117 -- # sync 00:10:49.346 22:02:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.346 22:02:31 -- nvmf/common.sh@120 -- # set +e 00:10:49.346 22:02:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.346 22:02:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.346 rmmod nvme_tcp 00:10:49.346 rmmod nvme_fabrics 00:10:49.604 rmmod nvme_keyring 00:10:49.604 22:02:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.604 22:02:31 -- nvmf/common.sh@124 -- # set -e 00:10:49.604 22:02:31 -- nvmf/common.sh@125 -- # return 0 00:10:49.604 22:02:31 -- nvmf/common.sh@478 -- # '[' -n 3883071 ']' 
00:10:49.604 22:02:31 -- nvmf/common.sh@479 -- # killprocess 3883071 00:10:49.604 22:02:31 -- common/autotest_common.sh@936 -- # '[' -z 3883071 ']' 00:10:49.604 22:02:31 -- common/autotest_common.sh@940 -- # kill -0 3883071 00:10:49.604 22:02:31 -- common/autotest_common.sh@941 -- # uname 00:10:49.604 22:02:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.604 22:02:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3883071 00:10:49.604 22:02:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:49.604 22:02:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:49.604 22:02:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3883071' 00:10:49.604 killing process with pid 3883071 00:10:49.604 22:02:31 -- common/autotest_common.sh@955 -- # kill 3883071 00:10:49.604 [2024-04-24 22:02:31.643214] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:49.604 22:02:31 -- common/autotest_common.sh@960 -- # wait 3883071 00:10:49.863 22:02:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:49.863 22:02:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:49.863 22:02:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:49.863 22:02:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.863 22:02:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.863 22:02:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.863 22:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.863 22:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.763 22:02:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:51.763 00:10:51.763 real 0m40.388s 00:10:51.763 user 2m37.277s 00:10:51.763 sys 0m10.738s 00:10:51.763 22:02:33 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.763 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.763 ************************************ 00:10:51.763 END TEST nvmf_ns_hotplug_stress 00:10:51.763 ************************************ 00:10:51.763 22:02:34 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:51.763 22:02:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:51.763 22:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.763 22:02:34 -- common/autotest_common.sh@10 -- # set +x 00:10:52.021 ************************************ 00:10:52.021 START TEST nvmf_connect_stress 00:10:52.021 ************************************ 00:10:52.021 22:02:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:52.021 * Looking for test storage... 
00:10:52.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.021 22:02:34 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.021 22:02:34 -- nvmf/common.sh@7 -- # uname -s 00:10:52.021 22:02:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.021 22:02:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.021 22:02:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.021 22:02:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.021 22:02:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.021 22:02:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.021 22:02:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.021 22:02:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.021 22:02:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.021 22:02:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.021 22:02:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:52.021 22:02:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:52.021 22:02:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.021 22:02:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.021 22:02:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.021 22:02:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.021 22:02:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.021 22:02:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.021 22:02:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.021 22:02:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.021 22:02:34 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.021 22:02:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.021 22:02:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.021 22:02:34 -- paths/export.sh@5 -- # export PATH 00:10:52.021 22:02:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.021 22:02:34 -- nvmf/common.sh@47 -- # : 0 00:10:52.021 22:02:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.021 22:02:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.021 22:02:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.021 22:02:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.021 22:02:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.021 22:02:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.021 22:02:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.021 22:02:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.021 22:02:34 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:52.021 22:02:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:52.021 22:02:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.021 22:02:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:52.021 22:02:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:52.021 22:02:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:52.021 22:02:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.021 22:02:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.021 22:02:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.021 22:02:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:52.021 22:02:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:52.021 22:02:34 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:10:52.021 22:02:34 -- common/autotest_common.sh@10 -- # set +x 00:10:54.551 22:02:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:54.551 22:02:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.551 22:02:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.551 22:02:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.551 22:02:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.551 22:02:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.551 22:02:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.551 22:02:36 -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.551 22:02:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.551 22:02:36 -- nvmf/common.sh@296 -- # e810=() 00:10:54.551 22:02:36 -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.551 22:02:36 -- nvmf/common.sh@297 -- # x722=() 00:10:54.551 22:02:36 -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.551 22:02:36 -- nvmf/common.sh@298 -- # mlx=() 00:10:54.551 22:02:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.551 22:02:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.551 22:02:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.551 22:02:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.551 22:02:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.551 22:02:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.551 22:02:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.551 22:02:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.551 22:02:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.551 22:02:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:54.551 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:54.551 22:02:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.551 22:02:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.551 22:02:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.552 22:02:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:54.552 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:54.552 22:02:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.552 22:02:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:10:54.552 22:02:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.552 22:02:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.552 22:02:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.552 22:02:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:54.552 Found net devices under 0000:84:00.0: cvl_0_0 00:10:54.552 22:02:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.552 22:02:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.552 22:02:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.552 22:02:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.552 22:02:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.552 22:02:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:54.552 Found net devices under 0000:84:00.1: cvl_0_1 00:10:54.552 22:02:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.552 22:02:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:54.552 22:02:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:54.552 22:02:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:54.552 22:02:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.552 22:02:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.552 22:02:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.552 22:02:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.552 22:02:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.552 22:02:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.552 22:02:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.552 22:02:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:10:54.552 22:02:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.552 22:02:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.552 22:02:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.552 22:02:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.552 22:02:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.552 22:02:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.552 22:02:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.552 22:02:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:54.552 22:02:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.552 22:02:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.552 22:02:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.552 22:02:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:54.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:10:54.552 00:10:54.552 --- 10.0.0.2 ping statistics --- 00:10:54.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.552 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:10:54.552 22:02:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:10:54.552 00:10:54.552 --- 10.0.0.1 ping statistics --- 00:10:54.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.552 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:54.552 22:02:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.552 22:02:36 -- nvmf/common.sh@411 -- # return 0 00:10:54.552 22:02:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:54.552 22:02:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.552 22:02:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:54.552 22:02:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.552 22:02:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:54.552 22:02:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:54.552 22:02:36 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:54.552 22:02:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:54.552 22:02:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:54.552 22:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:54.552 22:02:36 -- nvmf/common.sh@470 -- # nvmfpid=3889217 00:10:54.552 22:02:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:54.552 22:02:36 -- nvmf/common.sh@471 -- # waitforlisten 3889217 00:10:54.552 22:02:36 -- common/autotest_common.sh@817 -- # '[' -z 3889217 ']' 00:10:54.552 22:02:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.552 22:02:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.552 22:02:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:54.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.552 22:02:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.552 22:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:54.552 [2024-04-24 22:02:36.740608] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:10:54.552 [2024-04-24 22:02:36.740718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.552 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.811 [2024-04-24 22:02:36.824856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.811 [2024-04-24 22:02:36.955017] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.811 [2024-04-24 22:02:36.955079] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.811 [2024-04-24 22:02:36.955096] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.811 [2024-04-24 22:02:36.955110] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.811 [2024-04-24 22:02:36.955123] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
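The nvmf_tcp_init sequence recorded above builds a two-sided test topology on one host: the target interface (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator interface (cvl_0_1) stays in the default namespace as 10.0.0.1, and each side pings the other once to verify the path. A sketch of those commands follows; the `run()` wrapper is an addition here so the sketch prints the commands instead of requiring root and real NICs to execute them.

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology from the trace above. run() only echoes;
# replace its body with "$@" to actually execute (requires root and the
# cvl_0_0/cvl_0_1 interfaces).
set -euo pipefail

NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
# Move the target-side interface into the namespace, then address both ends.
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic (port 4420) arriving from the initiator side.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify connectivity in both directions, as the ping output above shows.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target application is then launched inside the namespace via `ip netns exec "$NVMF_TARGET_NAMESPACE" nvmf_tgt …`, which is why the NVMF_APP line above prefixes the command with NVMF_TARGET_NS_CMD.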
00:10:54.811 [2024-04-24 22:02:36.956420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.811 [2024-04-24 22:02:36.956496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.811 [2024-04-24 22:02:36.956501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.069 22:02:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.069 22:02:37 -- common/autotest_common.sh@850 -- # return 0 00:10:55.069 22:02:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:55.069 22:02:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.069 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.069 22:02:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.069 22:02:37 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.069 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.069 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 [2024-04-24 22:02:37.104868] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.070 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.070 22:02:37 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.070 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.070 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.070 22:02:37 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.070 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.070 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 [2024-04-24 22:02:37.122428] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:55.070 [2024-04-24 22:02:37.133568] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.070 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.070 22:02:37 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:55.070 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.070 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 NULL1 00:10:55.070 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.070 22:02:37 -- target/connect_stress.sh@21 -- # PERF_PID=3889362 00:10:55.070 22:02:37 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:55.070 22:02:37 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.070 22:02:37 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 
22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.070 22:02:37 -- target/connect_stress.sh@28 -- # cat 00:10:55.070 22:02:37 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:55.070 22:02:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.070 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.070 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.329 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.329 22:02:37 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:55.329 22:02:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.329 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.329 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.896 22:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.896 22:02:37 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:55.896 22:02:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.896 22:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.896 22:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:56.154 22:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.154 22:02:38 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:56.154 22:02:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.154 22:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.154 22:02:38 -- common/autotest_common.sh@10 -- # set +x 00:10:56.412 22:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.412 22:02:38 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:56.412 22:02:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.412 22:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.412 22:02:38 -- common/autotest_common.sh@10 -- # set +x 00:10:56.670 22:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
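The repeated `kill -0 3889362` records in this stretch are a liveness poll: while the connect_stress initiator (PERF_PID) is still running, the script keeps issuing the batched RPCs from rpc.txt against the target. `kill -0` delivers no signal; it only tests whether the process still exists. A minimal sketch of that pattern, with an illustrative stand-in process and loop body:

```shell
#!/usr/bin/env bash
# Sketch of the poll loop behind the repeated "kill -0 <pid>" lines above.
# The sleep processes and the loop body are stand-ins for connect_stress
# and the rpc_cmd batch, respectively.
set -euo pipefail

sleep 1 & PERF_PID=$!            # stand-in for the connect_stress process

# kill -0 succeeds while the process exists and fails once it is gone,
# so the loop ends when the stress tool exits.
while kill -0 "$PERF_PID" 2>/dev/null; do
    sleep 0.3                    # real test: rpc_cmd < rpc.txt
done
echo "process $PERF_PID exited"
```

When the stress process has already been reaped, `kill -0` fails with "No such process", which is exactly the message connect_stress.sh logs further below once the run completes.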
00:10:56.670 22:02:38 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:56.670 22:02:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.670 22:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.670 22:02:38 -- common/autotest_common.sh@10 -- # set +x 00:10:56.927 22:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.927 22:02:39 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:56.927 22:02:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.927 22:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.927 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:57.493 22:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.493 22:02:39 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:57.493 22:02:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.493 22:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.493 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:57.751 22:02:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.751 22:02:39 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:57.751 22:02:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.751 22:02:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.751 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:58.009 22:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.009 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:58.009 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.009 22:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.009 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:58.267 22:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.267 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:58.267 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.267 22:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.267 
22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:58.526 22:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.526 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:58.526 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.526 22:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.526 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:58.828 22:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.828 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:58.828 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.828 22:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.828 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:59.394 22:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.394 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:59.394 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.394 22:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.394 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:59.652 22:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.652 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:59.652 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.652 22:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.652 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:59.910 22:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.910 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:10:59.910 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.910 22:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.910 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.169 22:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.169 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:00.169 22:02:42 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.169 22:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.169 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.427 22:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.427 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:00.427 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.427 22:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.427 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.993 22:02:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.993 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:00.993 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.993 22:02:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.993 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:11:01.251 22:02:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.251 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:01.251 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.251 22:02:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.251 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:11:01.509 22:02:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.509 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:01.509 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.509 22:02:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.509 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:11:01.766 22:02:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.766 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:01.766 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.766 22:02:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.766 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:11:02.024 22:02:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.024 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:02.024 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.024 22:02:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.024 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.589 22:02:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.589 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:02.589 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.589 22:02:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.589 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.847 22:02:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.847 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:02.847 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.847 22:02:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.847 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.105 22:02:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.105 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:03.105 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.105 22:02:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.105 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.363 22:02:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.363 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:03.363 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.363 22:02:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.363 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.621 22:02:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.621 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:03.621 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.621 22:02:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.621 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:11:04.187 22:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.187 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:04.187 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.187 22:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.187 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:04.445 22:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.445 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:04.445 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.445 22:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.445 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:04.704 22:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.704 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:04.704 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.704 22:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.704 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:04.962 22:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.962 22:02:47 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:04.962 22:02:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.962 22:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.962 22:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:05.221 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.479 22:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.479 22:02:47 -- target/connect_stress.sh@34 -- # kill -0 3889362 00:11:05.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3889362) - No such process 00:11:05.479 22:02:47 -- target/connect_stress.sh@38 -- # wait 3889362 00:11:05.479 
22:02:47 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:05.479 22:02:47 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:05.479 22:02:47 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:05.479 22:02:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:05.479 22:02:47 -- nvmf/common.sh@117 -- # sync 00:11:05.479 22:02:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.479 22:02:47 -- nvmf/common.sh@120 -- # set +e 00:11:05.479 22:02:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.479 22:02:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.479 rmmod nvme_tcp 00:11:05.479 rmmod nvme_fabrics 00:11:05.479 rmmod nvme_keyring 00:11:05.479 22:02:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.479 22:02:47 -- nvmf/common.sh@124 -- # set -e 00:11:05.479 22:02:47 -- nvmf/common.sh@125 -- # return 0 00:11:05.479 22:02:47 -- nvmf/common.sh@478 -- # '[' -n 3889217 ']' 00:11:05.479 22:02:47 -- nvmf/common.sh@479 -- # killprocess 3889217 00:11:05.479 22:02:47 -- common/autotest_common.sh@936 -- # '[' -z 3889217 ']' 00:11:05.479 22:02:47 -- common/autotest_common.sh@940 -- # kill -0 3889217 00:11:05.479 22:02:47 -- common/autotest_common.sh@941 -- # uname 00:11:05.479 22:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.479 22:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3889217 00:11:05.479 22:02:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:05.479 22:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:05.479 22:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3889217' 00:11:05.479 killing process with pid 3889217 00:11:05.479 22:02:47 -- common/autotest_common.sh@955 -- # kill 3889217 00:11:05.479 [2024-04-24 22:02:47.584039] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:05.479 22:02:47 -- common/autotest_common.sh@960 -- # wait 3889217 00:11:05.737 22:02:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:05.737 22:02:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:05.737 22:02:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:05.737 22:02:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.737 22:02:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.737 22:02:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.737 22:02:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.738 22:02:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.268 22:02:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:08.268 00:11:08.268 real 0m15.817s 00:11:08.268 user 0m38.500s 00:11:08.268 sys 0m6.503s 00:11:08.268 22:02:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:08.268 22:02:49 -- common/autotest_common.sh@10 -- # set +x 00:11:08.268 ************************************ 00:11:08.268 END TEST nvmf_connect_stress 00:11:08.268 ************************************ 00:11:08.268 22:02:49 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.268 22:02:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:08.268 22:02:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.268 22:02:49 -- common/autotest_common.sh@10 -- # set +x 00:11:08.268 ************************************ 00:11:08.268 START TEST nvmf_fused_ordering 00:11:08.268 ************************************ 00:11:08.268 22:02:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.268 * Looking for test storage... 
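The connect_stress teardown recorded above follows a common autotest pattern: a trap installed at setup time guarantees nvmftestfini runs on any exit, and killprocess first checks (via `ps --no-headers -o comm=`) that the pid still names a live process before signalling it. A simplified sketch, with the cleanup body and stand-in process being illustrative rather than the real autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of the trap + killprocess teardown pattern seen in the log above.
set -euo pipefail

killprocess() {
    local pid=$1
    # Only signal the pid if it still refers to a live process.
    if ps --no-headers -o comm= "$pid" >/dev/null 2>&1; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi
}

cleanup() { echo "nvmftestfini: cleanup ran"; }
# The trap fires on normal exit and on SIGINT/SIGTERM alike, so the
# namespace and modules are torn down even if the test fails midway.
trap cleanup EXIT

sleep 60 & app_pid=$!            # stand-in for the nvmf_tgt application
killprocess "$app_pid"
```

This mirrors the "killing process with pid 3889217" message above, after which the EXIT trap handles `modprobe -r nvme-tcp`, namespace removal, and address flushing.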
00:11:08.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.268 22:02:50 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.268 22:02:50 -- nvmf/common.sh@7 -- # uname -s 00:11:08.268 22:02:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.268 22:02:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.268 22:02:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.268 22:02:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.268 22:02:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.268 22:02:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.268 22:02:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.268 22:02:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.268 22:02:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.268 22:02:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.268 22:02:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.268 22:02:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.268 22:02:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.268 22:02:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.268 22:02:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.268 22:02:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.268 22:02:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.268 22:02:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.268 22:02:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.268 22:02:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.268 22:02:50 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.268 22:02:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.268 22:02:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.268 22:02:50 -- paths/export.sh@5 -- # export PATH 00:11:08.268 22:02:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.268 22:02:50 -- nvmf/common.sh@47 -- # : 0 00:11:08.268 22:02:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.268 22:02:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.268 22:02:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.268 22:02:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.268 22:02:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.268 22:02:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.268 22:02:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.268 22:02:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.268 22:02:50 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:08.268 22:02:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:08.268 22:02:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.268 22:02:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:08.268 22:02:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:08.268 22:02:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:08.268 22:02:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.268 22:02:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.268 22:02:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.268 22:02:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:08.268 22:02:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:08.268 22:02:50 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:11:08.268 22:02:50 -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 22:02:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:10.798 22:02:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:10.798 22:02:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:10.798 22:02:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:10.798 22:02:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:10.798 22:02:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:10.798 22:02:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:10.798 22:02:52 -- nvmf/common.sh@295 -- # net_devs=() 00:11:10.798 22:02:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:10.798 22:02:52 -- nvmf/common.sh@296 -- # e810=() 00:11:10.798 22:02:52 -- nvmf/common.sh@296 -- # local -ga e810 00:11:10.798 22:02:52 -- nvmf/common.sh@297 -- # x722=() 00:11:10.798 22:02:52 -- nvmf/common.sh@297 -- # local -ga x722 00:11:10.798 22:02:52 -- nvmf/common.sh@298 -- # mlx=() 00:11:10.798 22:02:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:10.798 22:02:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.798 22:02:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.798 22:02:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:10.798 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:10.798 22:02:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:10.798 22:02:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:10.798 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:10.798 22:02:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:11:10.798 22:02:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.798 22:02:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.798 22:02:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:10.798 Found net devices under 0000:84:00.0: cvl_0_0 00:11:10.798 22:02:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:10.798 22:02:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.798 22:02:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.798 22:02:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:10.798 Found net devices under 0000:84:00.1: cvl_0_1 00:11:10.798 22:02:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:10.798 22:02:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:10.798 22:02:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.798 22:02:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.798 22:02:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:10.798 22:02:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.798 22:02:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.798 22:02:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:10.798 22:02:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:11:10.798 22:02:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.798 22:02:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:10.798 22:02:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:10.798 22:02:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.798 22:02:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.798 22:02:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.798 22:02:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.798 22:02:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:10.798 22:02:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.798 22:02:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.798 22:02:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.798 22:02:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:10.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:11:10.798 00:11:10.798 --- 10.0.0.2 ping statistics --- 00:11:10.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.798 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:10.798 22:02:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:10.798 00:11:10.798 --- 10.0.0.1 ping statistics --- 00:11:10.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.798 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:10.798 22:02:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.798 22:02:52 -- nvmf/common.sh@411 -- # return 0 00:11:10.798 22:02:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:10.798 22:02:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.798 22:02:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:10.798 22:02:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.798 22:02:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:10.798 22:02:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:10.798 22:02:52 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:10.798 22:02:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:10.798 22:02:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:10.798 22:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 22:02:52 -- nvmf/common.sh@470 -- # nvmfpid=3892544 00:11:10.798 22:02:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:10.798 22:02:52 -- nvmf/common.sh@471 -- # waitforlisten 3892544 00:11:10.798 22:02:52 -- common/autotest_common.sh@817 -- # '[' -z 3892544 ']' 00:11:10.798 22:02:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.798 22:02:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:10.798 22:02:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:10.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.798 22:02:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:10.798 22:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 [2024-04-24 22:02:52.726458] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:10.798 [2024-04-24 22:02:52.726553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.798 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.798 [2024-04-24 22:02:52.808671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.798 [2024-04-24 22:02:52.941650] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.798 [2024-04-24 22:02:52.941738] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.798 [2024-04-24 22:02:52.941757] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.799 [2024-04-24 22:02:52.941772] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.799 [2024-04-24 22:02:52.941786] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:10.799 [2024-04-24 22:02:52.941822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.056 22:02:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.056 22:02:53 -- common/autotest_common.sh@850 -- # return 0 00:11:11.056 22:02:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:11.056 22:02:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 22:02:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.056 22:02:53 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 [2024-04-24 22:02:53.108240] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 [2024-04-24 22:02:53.124182] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:11.056 [2024-04-24 22:02:53.124547] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 NULL1 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:11.056 22:02:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.056 22:02:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 22:02:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.056 22:02:53 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:11.056 [2024-04-24 22:02:53.172151] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
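The `rpc_cmd` sequence above (create the TCP transport, the subsystem, its listener, a null bdev, then attach it as a namespace) corresponds to SPDK's JSON-RPC client. A minimal sketch of that sequence, collected as a list so it can be inspected; the `scripts/rpc.py` path is an assumption about the SPDK tree, and actually issuing these commands requires a running `nvmf_tgt`:

```shell
# Hedged sketch: the rpc_cmd calls in the log, as scripts/rpc.py invocations.
# RPC path is assumed; the client talks to /var/tmp/spdk.sock by default.
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

rpc_seq=(
  "nvmf_create_transport -t tcp -o -u 8192"                    # TCP transport, 8192-byte in-capsule data
  "nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"  # allow any host, serial, max 10 namespaces
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420" # listen inside the target netns
  "bdev_null_create NULL1 1000 512"                            # null bdev: size 1000 (the log reports ~1GB), 512B blocks
  "bdev_wait_for_examine"                                      # let bdev examine callbacks finish
  "nvmf_subsystem_add_ns $NQN NULL1"                           # expose NULL1 as namespace 1
)

# Print the sequence; replace echo with eval (or run each via $RPC) against
# a live target.
for cmd in "${rpc_seq[@]}"; do
  echo "$RPC $cmd"
done
```

After this sequence the `fused_ordering` initiator tool connects to `traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN`, which is exactly the connection string visible in the log.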
00:11:11.057 [2024-04-24 22:02:53.172209] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892680 ] 00:11:11.057 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.622 Attached to nqn.2016-06.io.spdk:cnode1 00:11:11.622 Namespace ID: 1 size: 1GB 00:11:11.622 fused_ordering(0) 00:11:11.622 fused_ordering(1) 00:11:11.622 fused_ordering(2) 00:11:11.622 fused_ordering(3) 00:11:11.622 fused_ordering(4) 00:11:11.622 fused_ordering(5) 00:11:11.622 fused_ordering(6) 00:11:11.622 fused_ordering(7) 00:11:11.622 fused_ordering(8) 00:11:11.622 fused_ordering(9) 00:11:11.622 fused_ordering(10) 00:11:11.622 fused_ordering(11) 00:11:11.622 fused_ordering(12) 00:11:11.622 fused_ordering(13) 00:11:11.622 fused_ordering(14) 00:11:11.622 fused_ordering(15) 00:11:11.622 fused_ordering(16) 00:11:11.622 fused_ordering(17) 00:11:11.622 fused_ordering(18) 00:11:11.622 fused_ordering(19) 00:11:11.622 fused_ordering(20) 00:11:11.622 fused_ordering(21) 00:11:11.622 fused_ordering(22) 00:11:11.622 fused_ordering(23) 00:11:11.622 fused_ordering(24) 00:11:11.622 fused_ordering(25) 00:11:11.622 fused_ordering(26) 00:11:11.622 fused_ordering(27) 00:11:11.622 fused_ordering(28) 00:11:11.622 fused_ordering(29) 00:11:11.622 fused_ordering(30) 00:11:11.622 fused_ordering(31) 00:11:11.622 fused_ordering(32) 00:11:11.622 fused_ordering(33) 00:11:11.622 fused_ordering(34) 00:11:11.622 fused_ordering(35) 00:11:11.622 fused_ordering(36) 00:11:11.622 fused_ordering(37) 00:11:11.622 fused_ordering(38) 00:11:11.622 fused_ordering(39) 00:11:11.622 fused_ordering(40) 00:11:11.622 fused_ordering(41) 00:11:11.622 fused_ordering(42) 00:11:11.622 fused_ordering(43) 00:11:11.622 fused_ordering(44) 00:11:11.622 fused_ordering(45) 00:11:11.622 fused_ordering(46) 00:11:11.622 fused_ordering(47) 00:11:11.622 fused_ordering(48) 
00:11:11.622 fused_ordering(49) 00:11:11.622 fused_ordering(50) 00:11:11.622 fused_ordering(51) 00:11:11.622 fused_ordering(52) 00:11:11.622 fused_ordering(53) 00:11:11.622 fused_ordering(54) 00:11:11.622 fused_ordering(55) 00:11:11.622 fused_ordering(56) 00:11:11.622 fused_ordering(57) 00:11:11.622 fused_ordering(58) 00:11:11.622 fused_ordering(59) 00:11:11.622 fused_ordering(60) 00:11:11.622 fused_ordering(61) 00:11:11.622 fused_ordering(62) 00:11:11.622 fused_ordering(63) 00:11:11.622 fused_ordering(64) 00:11:11.622 fused_ordering(65) 00:11:11.622 fused_ordering(66) 00:11:11.622 fused_ordering(67) 00:11:11.622 fused_ordering(68) 00:11:11.622 fused_ordering(69) 00:11:11.622 fused_ordering(70) 00:11:11.622 fused_ordering(71) 00:11:11.622 fused_ordering(72) 00:11:11.622 fused_ordering(73) 00:11:11.622 fused_ordering(74) 00:11:11.622 fused_ordering(75) 00:11:11.622 fused_ordering(76) 00:11:11.622 fused_ordering(77) 00:11:11.622 fused_ordering(78) 00:11:11.623 fused_ordering(79) 00:11:11.623 fused_ordering(80) 00:11:11.623 fused_ordering(81) 00:11:11.623 fused_ordering(82) 00:11:11.623 fused_ordering(83) 00:11:11.623 fused_ordering(84) 00:11:11.623 fused_ordering(85) 00:11:11.623 fused_ordering(86) 00:11:11.623 fused_ordering(87) 00:11:11.623 fused_ordering(88) 00:11:11.623 fused_ordering(89) 00:11:11.623 fused_ordering(90) 00:11:11.623 fused_ordering(91) 00:11:11.623 fused_ordering(92) 00:11:11.623 fused_ordering(93) 00:11:11.623 fused_ordering(94) 00:11:11.623 fused_ordering(95) 00:11:11.623 fused_ordering(96) 00:11:11.623 fused_ordering(97) 00:11:11.623 fused_ordering(98) 00:11:11.623 fused_ordering(99) 00:11:11.623 fused_ordering(100) 00:11:11.623 fused_ordering(101) 00:11:11.623 fused_ordering(102) 00:11:11.623 fused_ordering(103) 00:11:11.623 fused_ordering(104) 00:11:11.623 fused_ordering(105) 00:11:11.623 fused_ordering(106) 00:11:11.623 fused_ordering(107) 00:11:11.623 fused_ordering(108) 00:11:11.623 fused_ordering(109) 00:11:11.623 fused_ordering(110) 
00:11:11.623 fused_ordering(111) 00:11:11.623 fused_ordering(112) 00:11:11.623 fused_ordering(113) 00:11:11.623 fused_ordering(114) 00:11:11.623 fused_ordering(115) 00:11:11.623 fused_ordering(116) 00:11:11.623 fused_ordering(117) 00:11:11.623 fused_ordering(118) 00:11:11.623 fused_ordering(119) 00:11:11.623 fused_ordering(120) 00:11:11.623 fused_ordering(121) 00:11:11.623 fused_ordering(122) 00:11:11.623 fused_ordering(123) 00:11:11.623 fused_ordering(124) 00:11:11.623 fused_ordering(125) 00:11:11.623 fused_ordering(126) 00:11:11.623 fused_ordering(127) 00:11:11.623 fused_ordering(128) 00:11:11.623 fused_ordering(129) 00:11:11.623 fused_ordering(130) 00:11:11.623 fused_ordering(131) 00:11:11.623 fused_ordering(132) 00:11:11.623 fused_ordering(133) 00:11:11.623 fused_ordering(134) 00:11:11.623 fused_ordering(135) 00:11:11.623 fused_ordering(136) 00:11:11.623 fused_ordering(137) 00:11:11.623 fused_ordering(138) 00:11:11.623 fused_ordering(139) 00:11:11.623 fused_ordering(140) 00:11:11.623 fused_ordering(141) 00:11:11.623 fused_ordering(142) 00:11:11.623 fused_ordering(143) 00:11:11.623 fused_ordering(144) 00:11:11.623 fused_ordering(145) 00:11:11.623 fused_ordering(146) 00:11:11.623 fused_ordering(147) 00:11:11.623 fused_ordering(148) 00:11:11.623 fused_ordering(149) 00:11:11.623 fused_ordering(150) 00:11:11.623 fused_ordering(151) 00:11:11.623 fused_ordering(152) 00:11:11.623 fused_ordering(153) 00:11:11.623 fused_ordering(154) 00:11:11.623 fused_ordering(155) 00:11:11.623 fused_ordering(156) 00:11:11.623 fused_ordering(157) 00:11:11.623 fused_ordering(158) 00:11:11.623 fused_ordering(159) 00:11:11.623 fused_ordering(160) 00:11:11.623 fused_ordering(161) 00:11:11.623 fused_ordering(162) 00:11:11.623 fused_ordering(163) 00:11:11.623 fused_ordering(164) 00:11:11.623 fused_ordering(165) 00:11:11.623 fused_ordering(166) 00:11:11.623 fused_ordering(167) 00:11:11.623 fused_ordering(168) 00:11:11.623 fused_ordering(169) 00:11:11.623 fused_ordering(170) 00:11:11.623 
fused_ordering(171) 00:11:11.623 fused_ordering(172) 00:11:11.623 fused_ordering(173) 00:11:11.623 fused_ordering(174) 00:11:11.623 fused_ordering(175) 00:11:11.623 fused_ordering(176) 00:11:11.623 fused_ordering(177) 00:11:11.623 fused_ordering(178) 00:11:11.623 fused_ordering(179) 00:11:11.623 fused_ordering(180) 00:11:11.623 fused_ordering(181) 00:11:11.623 fused_ordering(182) 00:11:11.623 fused_ordering(183) 00:11:11.623 fused_ordering(184) 00:11:11.623 fused_ordering(185) 00:11:11.623 fused_ordering(186) 00:11:11.623 fused_ordering(187) 00:11:11.623 fused_ordering(188) 00:11:11.623 fused_ordering(189) 00:11:11.623 fused_ordering(190) 00:11:11.623 fused_ordering(191) 00:11:11.623 fused_ordering(192) 00:11:11.623 fused_ordering(193) 00:11:11.623 fused_ordering(194) 00:11:11.623 fused_ordering(195) 00:11:11.623 fused_ordering(196) 00:11:11.623 fused_ordering(197) 00:11:11.623 fused_ordering(198) 00:11:11.623 fused_ordering(199) 00:11:11.623 fused_ordering(200) 00:11:11.623 fused_ordering(201) 00:11:11.623 fused_ordering(202) 00:11:11.623 fused_ordering(203) 00:11:11.623 fused_ordering(204) 00:11:11.623 fused_ordering(205) 00:11:11.882 fused_ordering(206) 00:11:11.882 fused_ordering(207) 00:11:11.882 fused_ordering(208) 00:11:11.882 fused_ordering(209) 00:11:11.882 fused_ordering(210) 00:11:11.882 fused_ordering(211) 00:11:11.882 fused_ordering(212) 00:11:11.882 fused_ordering(213) 00:11:11.882 fused_ordering(214) 00:11:11.882 fused_ordering(215) 00:11:11.882 fused_ordering(216) 00:11:11.882 fused_ordering(217) 00:11:11.882 fused_ordering(218) 00:11:11.882 fused_ordering(219) 00:11:11.882 fused_ordering(220) 00:11:11.882 fused_ordering(221) 00:11:11.882 fused_ordering(222) 00:11:11.882 fused_ordering(223) 00:11:11.882 fused_ordering(224) 00:11:11.882 fused_ordering(225) 00:11:11.882 fused_ordering(226) 00:11:11.882 fused_ordering(227) 00:11:11.882 fused_ordering(228) 00:11:11.882 fused_ordering(229) 00:11:11.882 fused_ordering(230) 00:11:11.882 fused_ordering(231) 
00:11:11.882 fused_ordering(232) 00:11:11.882 fused_ordering(233) 00:11:11.882 fused_ordering(234) 00:11:11.882 fused_ordering(235) 00:11:11.882 fused_ordering(236) 00:11:11.882 fused_ordering(237) 00:11:11.882 fused_ordering(238) 00:11:11.882 fused_ordering(239) 00:11:11.882 fused_ordering(240) 00:11:11.882 fused_ordering(241) 00:11:11.882 fused_ordering(242) 00:11:11.882 fused_ordering(243) 00:11:11.882 fused_ordering(244) 00:11:11.882 fused_ordering(245) 00:11:11.882 fused_ordering(246) 00:11:11.882 fused_ordering(247) 00:11:11.882 fused_ordering(248) 00:11:11.882 fused_ordering(249) 00:11:11.882 fused_ordering(250) 00:11:11.882 fused_ordering(251) 00:11:11.882 fused_ordering(252) 00:11:11.882 fused_ordering(253) 00:11:11.882 fused_ordering(254) 00:11:11.882 fused_ordering(255) 00:11:11.882 fused_ordering(256) 00:11:11.882 fused_ordering(257) 00:11:11.882 fused_ordering(258) 00:11:11.882 fused_ordering(259) 00:11:11.882 fused_ordering(260) 00:11:11.882 fused_ordering(261) 00:11:11.882 fused_ordering(262) 00:11:11.882 fused_ordering(263) 00:11:11.882 fused_ordering(264) 00:11:11.882 fused_ordering(265) 00:11:11.882 fused_ordering(266) 00:11:11.882 fused_ordering(267) 00:11:11.882 fused_ordering(268) 00:11:11.882 fused_ordering(269) 00:11:11.882 fused_ordering(270) 00:11:11.882 fused_ordering(271) 00:11:11.882 fused_ordering(272) 00:11:11.882 fused_ordering(273) 00:11:11.882 fused_ordering(274) 00:11:11.882 fused_ordering(275) 00:11:11.882 fused_ordering(276) 00:11:11.882 fused_ordering(277) 00:11:11.882 fused_ordering(278) 00:11:11.882 fused_ordering(279) 00:11:11.882 fused_ordering(280) 00:11:11.882 fused_ordering(281) 00:11:11.882 fused_ordering(282) 00:11:11.882 fused_ordering(283) 00:11:11.882 fused_ordering(284) 00:11:11.882 fused_ordering(285) 00:11:11.882 fused_ordering(286) 00:11:11.882 fused_ordering(287) 00:11:11.882 fused_ordering(288) 00:11:11.882 fused_ordering(289) 00:11:11.882 fused_ordering(290) 00:11:11.882 fused_ordering(291) 00:11:11.882 
fused_ordering(292) 00:11:11.882 fused_ordering(293) 00:11:11.882 fused_ordering(294) 00:11:11.882 fused_ordering(295) 00:11:11.882 fused_ordering(296) 00:11:11.882 fused_ordering(297) 00:11:11.882 fused_ordering(298) 00:11:11.882 fused_ordering(299) 00:11:11.882 fused_ordering(300) 00:11:11.882 fused_ordering(301) 00:11:11.882 fused_ordering(302) 00:11:11.882 fused_ordering(303) 00:11:11.882 fused_ordering(304) 00:11:11.882 fused_ordering(305) 00:11:11.882 fused_ordering(306) 00:11:11.882 fused_ordering(307) 00:11:11.882 fused_ordering(308) 00:11:11.882 fused_ordering(309) 00:11:11.882 fused_ordering(310) 00:11:11.882 fused_ordering(311) 00:11:11.882 fused_ordering(312) 00:11:11.882 fused_ordering(313) 00:11:11.882 fused_ordering(314) 00:11:11.882 fused_ordering(315) 00:11:11.882 fused_ordering(316) 00:11:11.882 fused_ordering(317) 00:11:11.882 fused_ordering(318) 00:11:11.882 fused_ordering(319) 00:11:11.882 fused_ordering(320) 00:11:11.882 fused_ordering(321) 00:11:11.882 fused_ordering(322) 00:11:11.882 fused_ordering(323) 00:11:11.882 fused_ordering(324) 00:11:11.882 fused_ordering(325) 00:11:11.882 fused_ordering(326) 00:11:11.882 fused_ordering(327) 00:11:11.882 fused_ordering(328) 00:11:11.882 fused_ordering(329) 00:11:11.882 fused_ordering(330) 00:11:11.882 fused_ordering(331) 00:11:11.882 fused_ordering(332) 00:11:11.882 fused_ordering(333) 00:11:11.882 fused_ordering(334) 00:11:11.882 fused_ordering(335) 00:11:11.882 fused_ordering(336) 00:11:11.882 fused_ordering(337) 00:11:11.882 fused_ordering(338) 00:11:11.882 fused_ordering(339) 00:11:11.882 fused_ordering(340) 00:11:11.882 fused_ordering(341) 00:11:11.882 fused_ordering(342) 00:11:11.882 fused_ordering(343) 00:11:11.882 fused_ordering(344) 00:11:11.882 fused_ordering(345) 00:11:11.882 fused_ordering(346) 00:11:11.882 fused_ordering(347) 00:11:11.882 fused_ordering(348) 00:11:11.882 fused_ordering(349) 00:11:11.882 fused_ordering(350) 00:11:11.882 fused_ordering(351) 00:11:11.882 fused_ordering(352) 
00:11:11.882 fused_ordering(353) 00:11:11.882 fused_ordering(354) 00:11:11.882 fused_ordering(355) 00:11:11.882 fused_ordering(356) 00:11:11.882 fused_ordering(357) 00:11:11.882 fused_ordering(358) 00:11:11.882 fused_ordering(359) 00:11:11.882 fused_ordering(360) 00:11:11.882 fused_ordering(361) 00:11:11.882 fused_ordering(362) 00:11:11.882 fused_ordering(363) 00:11:11.882 fused_ordering(364) 00:11:11.882 fused_ordering(365) 00:11:11.882 fused_ordering(366) 00:11:11.882 fused_ordering(367) 00:11:11.882 fused_ordering(368) 00:11:11.882 fused_ordering(369) 00:11:11.882 fused_ordering(370) 00:11:11.882 fused_ordering(371) 00:11:11.882 fused_ordering(372) 00:11:11.882 fused_ordering(373) 00:11:11.882 fused_ordering(374) 00:11:11.882 fused_ordering(375) 00:11:11.882 fused_ordering(376) 00:11:11.882 fused_ordering(377) 00:11:11.882 fused_ordering(378) 00:11:11.882 fused_ordering(379) 00:11:11.882 fused_ordering(380) 00:11:11.882 fused_ordering(381) 00:11:11.882 fused_ordering(382) 00:11:11.882 fused_ordering(383) 00:11:11.882 fused_ordering(384) 00:11:11.882 fused_ordering(385) 00:11:11.882 fused_ordering(386) 00:11:11.882 fused_ordering(387) 00:11:11.882 fused_ordering(388) 00:11:11.882 fused_ordering(389) 00:11:11.882 fused_ordering(390) 00:11:11.882 fused_ordering(391) 00:11:11.882 fused_ordering(392) 00:11:11.882 fused_ordering(393) 00:11:11.882 fused_ordering(394) 00:11:11.882 fused_ordering(395) 00:11:11.882 fused_ordering(396) 00:11:11.882 fused_ordering(397) 00:11:11.882 fused_ordering(398) 00:11:11.882 fused_ordering(399) 00:11:11.882 fused_ordering(400) 00:11:11.882 fused_ordering(401) 00:11:11.882 fused_ordering(402) 00:11:11.882 fused_ordering(403) 00:11:11.882 fused_ordering(404) 00:11:11.882 fused_ordering(405) 00:11:11.882 fused_ordering(406) 00:11:11.882 fused_ordering(407) 00:11:11.882 fused_ordering(408) 00:11:11.882 fused_ordering(409) 00:11:11.882 fused_ordering(410) 00:11:12.448 fused_ordering(411) 00:11:12.448 fused_ordering(412) 00:11:12.448 
00:11:12.448 fused_ordering(413) ... 00:11:13.951 fused_ordering(1017) [repetitive per-iteration fused_ordering log entries 413-1017 elided; all iterations completed successfully between 00:11:12.448 and 00:11:13.951]
00:11:13.951 fused_ordering(1018) 00:11:13.951 fused_ordering(1019) 00:11:13.951 fused_ordering(1020) 00:11:13.951 fused_ordering(1021) 00:11:13.951 fused_ordering(1022) 00:11:13.951 fused_ordering(1023) 00:11:13.951 22:02:56 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:13.951 22:02:56 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:14.210 22:02:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:14.210 22:02:56 -- nvmf/common.sh@117 -- # sync 00:11:14.210 22:02:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.210 22:02:56 -- nvmf/common.sh@120 -- # set +e 00:11:14.210 22:02:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.210 22:02:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.210 rmmod nvme_tcp 00:11:14.210 rmmod nvme_fabrics 00:11:14.210 rmmod nvme_keyring 00:11:14.210 22:02:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.210 22:02:56 -- nvmf/common.sh@124 -- # set -e 00:11:14.210 22:02:56 -- nvmf/common.sh@125 -- # return 0 00:11:14.210 22:02:56 -- nvmf/common.sh@478 -- # '[' -n 3892544 ']' 00:11:14.210 22:02:56 -- nvmf/common.sh@479 -- # killprocess 3892544 00:11:14.210 22:02:56 -- common/autotest_common.sh@936 -- # '[' -z 3892544 ']' 00:11:14.210 22:02:56 -- common/autotest_common.sh@940 -- # kill -0 3892544 00:11:14.210 22:02:56 -- common/autotest_common.sh@941 -- # uname 00:11:14.210 22:02:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.210 22:02:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3892544 00:11:14.210 22:02:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:14.210 22:02:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:14.210 22:02:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3892544' 00:11:14.210 killing process with pid 3892544 00:11:14.210 22:02:56 -- common/autotest_common.sh@955 -- # kill 3892544 00:11:14.210 [2024-04-24 22:02:56.289731] app.c: 
937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:14.210 22:02:56 -- common/autotest_common.sh@960 -- # wait 3892544 00:11:14.468 22:02:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:14.468 22:02:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:14.468 22:02:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:14.468 22:02:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.468 22:02:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.468 22:02:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.468 22:02:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.468 22:02:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.367 22:02:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.367 00:11:16.367 real 0m8.559s 00:11:16.367 user 0m5.792s 00:11:16.367 sys 0m4.171s 00:11:16.367 22:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:16.367 22:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.367 ************************************ 00:11:16.367 END TEST nvmf_fused_ordering 00:11:16.367 ************************************ 00:11:16.626 22:02:58 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:16.626 22:02:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.626 22:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.626 22:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.626 ************************************ 00:11:16.626 START TEST nvmf_delete_subsystem 00:11:16.626 ************************************ 00:11:16.626 22:02:58 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:16.626 * Looking for test storage... 00:11:16.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.626 22:02:58 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.626 22:02:58 -- nvmf/common.sh@7 -- # uname -s 00:11:16.626 22:02:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.626 22:02:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.626 22:02:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.626 22:02:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.626 22:02:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.626 22:02:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.626 22:02:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.626 22:02:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.626 22:02:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.626 22:02:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.626 22:02:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.626 22:02:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.626 22:02:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.626 22:02:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.626 22:02:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.626 22:02:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.626 22:02:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.626 22:02:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.626 22:02:58 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.626 22:02:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.626 22:02:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.626 22:02:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.626 22:02:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.626 22:02:58 -- paths/export.sh@5 -- # export PATH 00:11:16.626 22:02:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.626 22:02:58 -- nvmf/common.sh@47 -- # : 0 00:11:16.626 22:02:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.626 22:02:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.626 22:02:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.626 22:02:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.626 22:02:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.626 22:02:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.626 22:02:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.626 22:02:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.626 22:02:58 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:16.626 22:02:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:16.626 22:02:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.626 22:02:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:16.626 22:02:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:16.626 22:02:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:16.626 22:02:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.626 22:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.626 22:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.626 22:02:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:16.626 22:02:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:16.626 22:02:58 
-- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.626 22:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:19.174 22:03:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:19.174 22:03:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.174 22:03:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.174 22:03:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.174 22:03:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.174 22:03:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.174 22:03:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.174 22:03:01 -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.174 22:03:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.174 22:03:01 -- nvmf/common.sh@296 -- # e810=() 00:11:19.174 22:03:01 -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.174 22:03:01 -- nvmf/common.sh@297 -- # x722=() 00:11:19.174 22:03:01 -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.174 22:03:01 -- nvmf/common.sh@298 -- # mlx=() 00:11:19.174 22:03:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.174 22:03:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.174 22:03:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.174 22:03:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:19.174 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:19.174 22:03:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.174 22:03:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:19.174 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:19.174 22:03:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:11:19.174 22:03:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.174 22:03:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.174 22:03:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:19.174 Found net devices under 0000:84:00.0: cvl_0_0 00:11:19.174 22:03:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.174 22:03:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.174 22:03:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.174 22:03:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:19.174 Found net devices under 0000:84:00.1: cvl_0_1 00:11:19.174 22:03:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:19.174 22:03:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:19.174 22:03:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.174 22:03:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.174 22:03:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.174 22:03:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.174 22:03:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.174 22:03:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.174 22:03:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:11:19.174 22:03:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.174 22:03:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.174 22:03:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.174 22:03:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.174 22:03:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.174 22:03:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.174 22:03:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.174 22:03:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.174 22:03:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.174 22:03:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.174 22:03:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.174 22:03:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:11:19.174 00:11:19.174 --- 10.0.0.2 ping statistics --- 00:11:19.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.174 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:19.174 22:03:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:11:19.174 00:11:19.174 --- 10.0.0.1 ping statistics --- 00:11:19.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.174 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:19.174 22:03:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.174 22:03:01 -- nvmf/common.sh@411 -- # return 0 00:11:19.174 22:03:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:19.174 22:03:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.174 22:03:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:19.174 22:03:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.174 22:03:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:19.174 22:03:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:19.174 22:03:01 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:19.174 22:03:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:19.174 22:03:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:19.174 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.174 22:03:01 -- nvmf/common.sh@470 -- # nvmfpid=3895099 00:11:19.174 22:03:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:19.174 22:03:01 -- nvmf/common.sh@471 -- # waitforlisten 3895099 00:11:19.174 22:03:01 -- common/autotest_common.sh@817 -- # '[' -z 3895099 ']' 00:11:19.174 22:03:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.174 22:03:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:19.174 22:03:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:19.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.174 22:03:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:19.174 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.174 [2024-04-24 22:03:01.359222] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:19.174 [2024-04-24 22:03:01.359313] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.174 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.433 [2024-04-24 22:03:01.439529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:19.433 [2024-04-24 22:03:01.560197] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.433 [2024-04-24 22:03:01.560258] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.433 [2024-04-24 22:03:01.560275] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.433 [2024-04-24 22:03:01.560289] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.433 [2024-04-24 22:03:01.560301] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:19.433 [2024-04-24 22:03:01.562419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.433 [2024-04-24 22:03:01.562431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.433 22:03:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:19.433 22:03:01 -- common/autotest_common.sh@850 -- # return 0 00:11:19.433 22:03:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:19.433 22:03:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:19.433 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 22:03:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 [2024-04-24 22:03:01.715859] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 [2024-04-24 22:03:01.731829] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed 
in v24.09 00:11:19.691 [2024-04-24 22:03:01.732149] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 NULL1 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 Delay0 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.691 22:03:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.691 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 22:03:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@28 -- # perf_pid=3895151 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:19.691 22:03:01 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:19.691 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.691 [2024-04-24 22:03:01.816854] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:11:21.588 22:03:03 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.588 22:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.588 22:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 
Write completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, 
sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 starting I/O failed: -6 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Write completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.847 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read 
completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 starting I/O failed: -6 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 [2024-04-24 22:03:03.950933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9d880 is same with the state(5) to be set 00:11:21.848 Read completed with error 
(sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 
00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Write completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:21.848 Read completed with error (sct=0, sc=8) 00:11:22.782 [2024-04-24 22:03:04.919069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabc120 is same with the state(5) to be set 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read 
completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 [2024-04-24 22:03:04.947774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9da10 is same with the state(5) to be set 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 [2024-04-24 22:03:04.947970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9dd30 is same with the state(5) to be set 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed 
with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 [2024-04-24 22:03:04.954435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb48400bf90 is same with the state(5) to be set 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Write completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 Read completed with error (sct=0, sc=8) 00:11:22.782 [2024-04-24 22:03:04.955075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb48400c510 is same with the state(5) to be set 00:11:22.782 [2024-04-24 22:03:04.955581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabc120 (9): Bad file descriptor 00:11:22.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:22.782 22:03:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.782 22:03:04 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:22.782 22:03:04 -- target/delete_subsystem.sh@35 -- # kill -0 3895151 00:11:22.782 22:03:04 -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:11:22.782 Initializing NVMe Controllers 00:11:22.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.782 Controller IO queue size 128, less than required. 00:11:22.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:22.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:22.782 Initialization complete. Launching workers. 00:11:22.782 ======================================================== 00:11:22.782 Latency(us) 00:11:22.782 Device Information : IOPS MiB/s Average min max 00:11:22.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.54 0.08 909302.13 455.86 1014521.74 00:11:22.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.66 0.07 947497.83 484.28 1048324.07 00:11:22.782 ======================================================== 00:11:22.782 Total : 314.20 0.15 927616.79 455.86 1048324.07 00:11:22.782 00:11:23.348 22:03:05 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:23.348 22:03:05 -- target/delete_subsystem.sh@35 -- # kill -0 3895151 00:11:23.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3895151) - No such process 00:11:23.348 22:03:05 -- target/delete_subsystem.sh@45 -- # NOT wait 3895151 00:11:23.348 22:03:05 -- common/autotest_common.sh@638 -- # local es=0 00:11:23.348 22:03:05 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3895151 00:11:23.348 22:03:05 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:23.348 22:03:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.348 22:03:05 -- common/autotest_common.sh@630 -- # type -t wait 00:11:23.348 22:03:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:11:23.348 22:03:05 -- common/autotest_common.sh@641 -- # wait 3895151 00:11:23.349 22:03:05 -- common/autotest_common.sh@641 -- # es=1 00:11:23.349 22:03:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:23.349 22:03:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:23.349 22:03:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:23.349 22:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.349 22:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.349 22:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.349 22:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.349 22:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.349 [2024-04-24 22:03:05.478538] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.349 22:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.349 22:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.349 22:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.349 22:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@54 -- # perf_pid=3895634 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@57 
-- # kill -0 3895634 00:11:23.349 22:03:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.349 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.349 [2024-04-24 22:03:05.549421] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:23.914 22:03:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.914 22:03:05 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:23.914 22:03:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.481 22:03:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.481 22:03:06 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:24.481 22:03:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.047 22:03:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.047 22:03:07 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:25.047 22:03:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.305 22:03:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.305 22:03:07 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:25.305 22:03:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.870 22:03:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.870 22:03:08 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:25.870 22:03:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.436 22:03:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.436 22:03:08 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:26.436 22:03:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.693 Initializing NVMe Controllers 00:11:26.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.693 Controller IO queue size 128, 
less than required. 00:11:26.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:26.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:26.693 Initialization complete. Launching workers. 00:11:26.693 ======================================================== 00:11:26.693 Latency(us) 00:11:26.693 Device Information : IOPS MiB/s Average min max 00:11:26.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004177.05 1000222.91 1041263.52 00:11:26.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005645.57 1000247.86 1016085.34 00:11:26.694 ======================================================== 00:11:26.694 Total : 256.00 0.12 1004911.31 1000222.91 1041263.52 00:11:26.694 00:11:26.952 22:03:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.952 22:03:09 -- target/delete_subsystem.sh@57 -- # kill -0 3895634 00:11:26.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3895634) - No such process 00:11:26.953 22:03:09 -- target/delete_subsystem.sh@67 -- # wait 3895634 00:11:26.953 22:03:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:26.953 22:03:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:26.953 22:03:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.953 22:03:09 -- nvmf/common.sh@117 -- # sync 00:11:26.953 22:03:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.953 22:03:09 -- nvmf/common.sh@120 -- # set +e 00:11:26.953 22:03:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.953 22:03:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.953 rmmod nvme_tcp 00:11:26.953 rmmod nvme_fabrics 00:11:26.953 rmmod nvme_keyring 00:11:26.953 22:03:09 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.953 22:03:09 -- nvmf/common.sh@124 -- # set -e 00:11:26.953 22:03:09 -- nvmf/common.sh@125 -- # return 0 00:11:26.953 22:03:09 -- nvmf/common.sh@478 -- # '[' -n 3895099 ']' 00:11:26.953 22:03:09 -- nvmf/common.sh@479 -- # killprocess 3895099 00:11:26.953 22:03:09 -- common/autotest_common.sh@936 -- # '[' -z 3895099 ']' 00:11:26.953 22:03:09 -- common/autotest_common.sh@940 -- # kill -0 3895099 00:11:26.953 22:03:09 -- common/autotest_common.sh@941 -- # uname 00:11:26.953 22:03:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.953 22:03:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3895099 00:11:26.953 22:03:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.953 22:03:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.953 22:03:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3895099' 00:11:26.953 killing process with pid 3895099 00:11:26.953 22:03:09 -- common/autotest_common.sh@955 -- # kill 3895099 00:11:26.953 [2024-04-24 22:03:09.106399] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:26.953 22:03:09 -- common/autotest_common.sh@960 -- # wait 3895099 00:11:27.211 22:03:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:27.211 22:03:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:27.211 22:03:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:27.211 22:03:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.211 22:03:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.211 22:03:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.211 22:03:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.211 22:03:09 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:29.740 22:03:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.740 00:11:29.740 real 0m12.689s 00:11:29.740 user 0m27.915s 00:11:29.740 sys 0m3.278s 00:11:29.740 22:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.740 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.740 ************************************ 00:11:29.740 END TEST nvmf_delete_subsystem 00:11:29.740 ************************************ 00:11:29.740 22:03:11 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:29.740 22:03:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:29.740 22:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.740 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.740 ************************************ 00:11:29.740 START TEST nvmf_ns_masking 00:11:29.740 ************************************ 00:11:29.740 22:03:11 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:29.740 * Looking for test storage... 
00:11:29.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.740 22:03:11 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.740 22:03:11 -- nvmf/common.sh@7 -- # uname -s 00:11:29.740 22:03:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.740 22:03:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.740 22:03:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.740 22:03:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.740 22:03:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.740 22:03:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.740 22:03:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.740 22:03:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.740 22:03:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.740 22:03:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.740 22:03:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.740 22:03:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.740 22:03:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.740 22:03:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.740 22:03:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.740 22:03:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.740 22:03:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.740 22:03:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.740 22:03:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.740 22:03:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.740 22:03:11 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.740 22:03:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.740 22:03:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.740 22:03:11 -- paths/export.sh@5 -- # export PATH 00:11:29.740 22:03:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.740 22:03:11 -- nvmf/common.sh@47 -- # : 0 00:11:29.740 22:03:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.740 22:03:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.740 22:03:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.740 22:03:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.740 22:03:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.740 22:03:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.740 22:03:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.740 22:03:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.740 22:03:11 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.740 22:03:11 -- target/ns_masking.sh@11 -- # loops=5 00:11:29.740 22:03:11 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:29.740 22:03:11 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:29.740 22:03:11 -- target/ns_masking.sh@15 -- # uuidgen 00:11:29.740 22:03:11 -- target/ns_masking.sh@15 -- # HOSTID=c12e24a7-c710-4491-83a1-a3497f6229e0 00:11:29.740 22:03:11 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:29.740 22:03:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:29.740 22:03:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.740 22:03:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:29.740 22:03:11 -- 
nvmf/common.sh@399 -- # local -g is_hw=no 00:11:29.740 22:03:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:29.740 22:03:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.740 22:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.740 22:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.740 22:03:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:29.740 22:03:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:29.741 22:03:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.741 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:32.287 22:03:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.287 22:03:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.287 22:03:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.287 22:03:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.287 22:03:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.287 22:03:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.287 22:03:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.287 22:03:13 -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.287 22:03:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.287 22:03:13 -- nvmf/common.sh@296 -- # e810=() 00:11:32.287 22:03:13 -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.287 22:03:13 -- nvmf/common.sh@297 -- # x722=() 00:11:32.287 22:03:13 -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.287 22:03:13 -- nvmf/common.sh@298 -- # mlx=() 00:11:32.287 22:03:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.287 22:03:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.287 
22:03:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.287 22:03:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.287 22:03:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:32.287 22:03:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.287 22:03:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:32.287 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:32.287 22:03:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.287 22:03:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:32.287 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:32.287 22:03:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:11:32.287 22:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.287 22:03:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.287 22:03:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.287 22:03:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:32.287 Found net devices under 0000:84:00.0: cvl_0_0 00:11:32.287 22:03:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.287 22:03:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.287 22:03:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.287 22:03:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.287 22:03:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:32.287 Found net devices under 0000:84:00.1: cvl_0_1 00:11:32.287 22:03:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.287 22:03:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:32.287 22:03:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:32.287 22:03:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:32.287 22:03:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.287 22:03:13 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.287 22:03:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.287 22:03:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:32.287 22:03:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.287 22:03:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.287 22:03:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:32.287 22:03:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.287 22:03:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.287 22:03:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:32.287 22:03:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:32.287 22:03:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.287 22:03:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.287 22:03:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.287 22:03:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.287 22:03:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:32.287 22:03:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.287 22:03:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.287 22:03:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.287 22:03:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:32.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:32.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:11:32.287 00:11:32.287 --- 10.0.0.2 ping statistics --- 00:11:32.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.287 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:11:32.287 22:03:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:11:32.287 00:11:32.287 --- 10.0.0.1 ping statistics --- 00:11:32.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.287 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:32.287 22:03:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.287 22:03:14 -- nvmf/common.sh@411 -- # return 0 00:11:32.287 22:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:32.287 22:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.287 22:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:32.287 22:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:32.287 22:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.287 22:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:32.287 22:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:32.287 22:03:14 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:32.287 22:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:32.287 22:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.287 22:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.287 22:03:14 -- nvmf/common.sh@470 -- # nvmfpid=3898570 00:11:32.287 22:03:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.287 22:03:14 -- nvmf/common.sh@471 -- # waitforlisten 3898570 00:11:32.287 22:03:14 -- 
common/autotest_common.sh@817 -- # '[' -z 3898570 ']' 00:11:32.287 22:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.287 22:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:32.287 22:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.287 22:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:32.287 22:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.287 [2024-04-24 22:03:14.159529] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:32.287 [2024-04-24 22:03:14.159615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.287 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.287 [2024-04-24 22:03:14.235390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.287 [2024-04-24 22:03:14.355638] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.287 [2024-04-24 22:03:14.355704] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.287 [2024-04-24 22:03:14.355721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.287 [2024-04-24 22:03:14.355735] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.287 [2024-04-24 22:03:14.355747] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:32.287 [2024-04-24 22:03:14.355850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.287 [2024-04-24 22:03:14.355927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.287 [2024-04-24 22:03:14.355977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.287 [2024-04-24 22:03:14.355980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.287 22:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.287 22:03:14 -- common/autotest_common.sh@850 -- # return 0 00:11:32.287 22:03:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:32.287 22:03:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:32.287 22:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.287 22:03:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.287 22:03:14 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.851 [2024-04-24 22:03:14.826498] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.851 22:03:14 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:32.851 22:03:14 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:32.851 22:03:14 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:33.109 Malloc1 00:11:33.109 22:03:15 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:33.675 Malloc2 00:11:33.675 22:03:15 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.933 22:03:16 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:34.191 22:03:16 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.757 [2024-04-24 22:03:16.903961] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:34.757 [2024-04-24 22:03:16.904292] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.757 22:03:16 -- target/ns_masking.sh@61 -- # connect 00:11:34.757 22:03:16 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c12e24a7-c710-4491-83a1-a3497f6229e0 -a 10.0.0.2 -s 4420 -i 4 00:11:35.015 22:03:17 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.015 22:03:17 -- common/autotest_common.sh@1184 -- # local i=0 00:11:35.015 22:03:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.015 22:03:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:35.015 22:03:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:36.912 22:03:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:36.912 22:03:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:36.912 22:03:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.912 22:03:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:36.912 22:03:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.912 22:03:19 -- common/autotest_common.sh@1194 -- # return 0 00:11:36.912 22:03:19 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:36.912 22:03:19 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | 
.Paths[0].Name' 00:11:36.912 22:03:19 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:36.912 22:03:19 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:36.912 22:03:19 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:36.912 22:03:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:36.912 22:03:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:36.912 [ 0]:0x1 00:11:36.912 22:03:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.912 22:03:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:36.912 22:03:19 -- target/ns_masking.sh@40 -- # nguid=2509aac804664b18b91f5dee0e27d5b0 00:11:36.912 22:03:19 -- target/ns_masking.sh@41 -- # [[ 2509aac804664b18b91f5dee0e27d5b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.912 22:03:19 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:37.477 22:03:19 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:37.477 22:03:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:37.477 22:03:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:37.477 [ 0]:0x1 00:11:37.477 22:03:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.477 22:03:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:37.477 22:03:19 -- target/ns_masking.sh@40 -- # nguid=2509aac804664b18b91f5dee0e27d5b0 00:11:37.477 22:03:19 -- target/ns_masking.sh@41 -- # [[ 2509aac804664b18b91f5dee0e27d5b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.477 22:03:19 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:37.477 22:03:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:37.477 22:03:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:37.477 [ 1]:0x2 00:11:37.477 22:03:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.477 22:03:19 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:37.477 22:03:19 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:37.477 22:03:19 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.478 22:03:19 -- target/ns_masking.sh@69 -- # disconnect 00:11:37.478 22:03:19 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.736 22:03:19 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.994 22:03:20 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:38.252 22:03:20 -- target/ns_masking.sh@77 -- # connect 1 00:11:38.253 22:03:20 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c12e24a7-c710-4491-83a1-a3497f6229e0 -a 10.0.0.2 -s 4420 -i 4 00:11:38.511 22:03:20 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:38.511 22:03:20 -- common/autotest_common.sh@1184 -- # local i=0 00:11:38.511 22:03:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.511 22:03:20 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:38.511 22:03:20 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:38.511 22:03:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:40.453 22:03:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:40.453 22:03:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:40.453 22:03:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.453 22:03:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:40.453 
22:03:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.453 22:03:22 -- common/autotest_common.sh@1194 -- # return 0 00:11:40.453 22:03:22 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:40.453 22:03:22 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:40.453 22:03:22 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:40.453 22:03:22 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:40.453 22:03:22 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:40.453 22:03:22 -- common/autotest_common.sh@638 -- # local es=0 00:11:40.453 22:03:22 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:40.453 22:03:22 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:40.453 22:03:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.453 22:03:22 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:40.453 22:03:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.453 22:03:22 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:40.453 22:03:22 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.453 22:03:22 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.453 22:03:22 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.453 22:03:22 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.453 22:03:22 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:40.454 22:03:22 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.454 22:03:22 -- common/autotest_common.sh@641 -- # es=1 00:11:40.454 22:03:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:40.454 22:03:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:40.454 22:03:22 -- common/autotest_common.sh@665 -- 
# (( !es == 0 )) 00:11:40.454 22:03:22 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:40.710 22:03:22 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.710 22:03:22 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.710 [ 0]:0x2 00:11:40.710 22:03:22 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.710 22:03:22 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.710 22:03:22 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:40.710 22:03:22 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.710 22:03:22 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:40.967 22:03:23 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:40.967 22:03:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.967 22:03:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.967 [ 0]:0x1 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # nguid=2509aac804664b18b91f5dee0e27d5b0 00:11:40.967 22:03:23 -- target/ns_masking.sh@41 -- # [[ 2509aac804664b18b91f5dee0e27d5b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.967 22:03:23 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:40.967 22:03:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.967 22:03:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.967 [ 1]:0x2 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.967 22:03:23 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 
00:11:40.967 22:03:23 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.967 22:03:23 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:41.227 22:03:23 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:41.227 22:03:23 -- common/autotest_common.sh@638 -- # local es=0 00:11:41.227 22:03:23 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:41.227 22:03:23 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:41.227 22:03:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:41.227 22:03:23 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:41.227 22:03:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:41.227 22:03:23 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:41.227 22:03:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:41.227 22:03:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:41.484 22:03:23 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.484 22:03:23 -- common/autotest_common.sh@641 -- # es=1 00:11:41.484 22:03:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:41.484 22:03:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:41.484 22:03:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:41.484 22:03:23 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:41.484 22:03:23 -- target/ns_masking.sh@39 -- # nvme list-ns 
/dev/nvme0 00:11:41.484 22:03:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:41.484 [ 0]:0x2 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:41.484 22:03:23 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:41.484 22:03:23 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.484 22:03:23 -- target/ns_masking.sh@91 -- # disconnect 00:11:41.484 22:03:23 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.484 22:03:23 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:41.742 22:03:23 -- target/ns_masking.sh@95 -- # connect 2 00:11:41.742 22:03:23 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c12e24a7-c710-4491-83a1-a3497f6229e0 -a 10.0.0.2 -s 4420 -i 4 00:11:42.000 22:03:24 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:42.000 22:03:24 -- common/autotest_common.sh@1184 -- # local i=0 00:11:42.000 22:03:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.000 22:03:24 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:42.000 22:03:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:42.000 22:03:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:43.899 22:03:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:43.899 22:03:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:43.899 22:03:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.899 22:03:26 -- common/autotest_common.sh@1193 -- 
# nvme_devices=2 00:11:43.899 22:03:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.899 22:03:26 -- common/autotest_common.sh@1194 -- # return 0 00:11:43.899 22:03:26 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:43.899 22:03:26 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:44.157 22:03:26 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:44.157 22:03:26 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:44.157 22:03:26 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:44.157 22:03:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.157 22:03:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:44.157 [ 0]:0x1 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # nguid=2509aac804664b18b91f5dee0e27d5b0 00:11:44.157 22:03:26 -- target/ns_masking.sh@41 -- # [[ 2509aac804664b18b91f5dee0e27d5b0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.157 22:03:26 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:44.157 22:03:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.157 22:03:26 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:44.157 [ 1]:0x2 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.157 22:03:26 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:44.157 22:03:26 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.157 22:03:26 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.722 22:03:26 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:44.723 22:03:26 -- common/autotest_common.sh@638 -- # local es=0 00:11:44.723 22:03:26 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:44.723 22:03:26 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:44.723 22:03:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.723 22:03:26 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:44.723 22:03:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.723 22:03:26 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:44.723 22:03:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.723 22:03:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:44.723 22:03:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.723 22:03:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.981 22:03:26 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:44.981 22:03:26 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.981 22:03:26 -- common/autotest_common.sh@641 -- # es=1 00:11:44.981 22:03:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:44.981 22:03:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:44.981 22:03:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:44.981 22:03:26 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:44.981 22:03:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:44.981 22:03:26 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:44.981 [ 0]:0x2 00:11:44.981 22:03:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.981 22:03:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.981 22:03:27 -- target/ns_masking.sh@40 
-- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:44.981 22:03:27 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.981 22:03:27 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:44.981 22:03:27 -- common/autotest_common.sh@638 -- # local es=0 00:11:44.981 22:03:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:44.981 22:03:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.981 22:03:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.981 22:03:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.981 22:03:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.981 22:03:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.981 22:03:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.981 22:03:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:44.981 22:03:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:44.981 22:03:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:45.239 [2024-04-24 22:03:27.299264] nvmf_rpc.c:1787:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:45.239 request: 
00:11:45.239 { 00:11:45.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:45.239 "nsid": 2, 00:11:45.239 "host": "nqn.2016-06.io.spdk:host1", 00:11:45.239 "method": "nvmf_ns_remove_host", 00:11:45.239 "req_id": 1 00:11:45.239 } 00:11:45.239 Got JSON-RPC error response 00:11:45.239 response: 00:11:45.239 { 00:11:45.239 "code": -32602, 00:11:45.239 "message": "Invalid parameters" 00:11:45.239 } 00:11:45.239 22:03:27 -- common/autotest_common.sh@641 -- # es=1 00:11:45.239 22:03:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:45.239 22:03:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:45.239 22:03:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:45.239 22:03:27 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:45.239 22:03:27 -- common/autotest_common.sh@638 -- # local es=0 00:11:45.240 22:03:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:45.240 22:03:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:45.240 22:03:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.240 22:03:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:45.240 22:03:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.240 22:03:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:45.240 22:03:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.240 22:03:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:45.240 22:03:27 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.240 22:03:27 -- common/autotest_common.sh@641 -- # es=1 00:11:45.240 22:03:27 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:45.240 22:03:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:45.240 22:03:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:45.240 22:03:27 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:45.240 22:03:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.240 22:03:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:45.240 [ 0]:0x2 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:45.240 22:03:27 -- target/ns_masking.sh@40 -- # nguid=5c47a0f606c54f98947542166e6a9d5b 00:11:45.240 22:03:27 -- target/ns_masking.sh@41 -- # [[ 5c47a0f606c54f98947542166e6a9d5b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.240 22:03:27 -- target/ns_masking.sh@108 -- # disconnect 00:11:45.240 22:03:27 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.498 22:03:27 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.756 22:03:27 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:45.756 22:03:27 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:45.756 22:03:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:45.756 22:03:27 -- nvmf/common.sh@117 -- # sync 00:11:45.756 22:03:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.756 22:03:27 -- nvmf/common.sh@120 -- # set +e 00:11:45.756 22:03:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.756 22:03:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.756 rmmod nvme_tcp 00:11:45.756 rmmod nvme_fabrics 00:11:45.756 rmmod nvme_keyring 00:11:45.756 22:03:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.756 22:03:27 -- nvmf/common.sh@124 -- # 
set -e 00:11:45.756 22:03:27 -- nvmf/common.sh@125 -- # return 0 00:11:45.756 22:03:27 -- nvmf/common.sh@478 -- # '[' -n 3898570 ']' 00:11:45.756 22:03:27 -- nvmf/common.sh@479 -- # killprocess 3898570 00:11:45.756 22:03:27 -- common/autotest_common.sh@936 -- # '[' -z 3898570 ']' 00:11:45.756 22:03:27 -- common/autotest_common.sh@940 -- # kill -0 3898570 00:11:45.756 22:03:27 -- common/autotest_common.sh@941 -- # uname 00:11:45.756 22:03:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.756 22:03:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3898570 00:11:45.756 22:03:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.756 22:03:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.756 22:03:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3898570' 00:11:45.756 killing process with pid 3898570 00:11:45.756 22:03:27 -- common/autotest_common.sh@955 -- # kill 3898570 00:11:45.756 [2024-04-24 22:03:27.992352] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:45.756 22:03:27 -- common/autotest_common.sh@960 -- # wait 3898570 00:11:46.352 22:03:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:46.352 22:03:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:46.352 22:03:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:46.352 22:03:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.352 22:03:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.352 22:03:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.352 22:03:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.352 22:03:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.255 22:03:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.255 
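The ns_masking suite above decides namespace visibility the same way every time: run `nvme id-ns /dev/nvme0 -n <nsid> -o json`, pull `.nguid` out with `jq -r`, and compare it against a 32-character all-zero string (a masked namespace reports a zeroed NGUID). A minimal Python sketch of that decision; the JSON payloads are hypothetical stand-ins shaped like the values recorded in this log:

```python
import json

ZERO_NGUID = "0" * 32

def ns_is_visible(id_ns_json: str) -> bool:
    """Mirror the log's check: a namespace is visible when `nvme id-ns -o json`
    reports a non-zero NGUID; a masked namespace comes back as all zeroes."""
    nguid = json.loads(id_ns_json).get("nguid", ZERO_NGUID)
    return nguid != ZERO_NGUID

# Sample payloads using NGUID values seen in this log run:
visible = '{"nguid": "5c47a0f606c54f98947542166e6a9d5b"}'
masked = '{"nguid": "00000000000000000000000000000000"}'
print(ns_is_visible(visible), ns_is_visible(masked))  # → True False
```

The `NOT ns_is_visible` wrappers in the trace are just this check run under `valid_exec_arg`, with the test asserting the opposite outcome (`es=1`).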
00:11:48.255 real 0m18.754s 00:11:48.255 user 1m0.669s 00:11:48.255 sys 0m4.135s 00:11:48.255 22:03:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.255 22:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:48.255 ************************************ 00:11:48.255 END TEST nvmf_ns_masking 00:11:48.255 ************************************ 00:11:48.255 22:03:30 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:48.255 22:03:30 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:48.255 22:03:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.255 22:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.255 22:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:48.514 ************************************ 00:11:48.514 START TEST nvmf_nvme_cli 00:11:48.514 ************************************ 00:11:48.514 22:03:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:48.514 * Looking for test storage... 
00:11:48.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.514 22:03:30 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.514 22:03:30 -- nvmf/common.sh@7 -- # uname -s 00:11:48.514 22:03:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.514 22:03:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.514 22:03:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.514 22:03:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.514 22:03:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.514 22:03:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.514 22:03:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.514 22:03:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.514 22:03:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.514 22:03:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.514 22:03:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.514 22:03:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.514 22:03:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.514 22:03:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.514 22:03:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.514 22:03:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.514 22:03:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.514 22:03:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.514 22:03:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.514 22:03:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.514 22:03:30 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.514 22:03:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.514 22:03:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.514 22:03:30 -- paths/export.sh@5 -- # export PATH 00:11:48.514 22:03:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.514 22:03:30 -- nvmf/common.sh@47 -- # : 0 00:11:48.514 22:03:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.514 22:03:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.514 22:03:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.514 22:03:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.514 22:03:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.514 22:03:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.514 22:03:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.514 22:03:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.514 22:03:30 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.514 22:03:30 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.514 22:03:30 -- target/nvme_cli.sh@14 -- # devs=() 00:11:48.514 22:03:30 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:48.514 22:03:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:48.514 22:03:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.514 22:03:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:48.514 22:03:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:48.514 22:03:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:48.514 22:03:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.514 22:03:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.514 22:03:30 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.514 22:03:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:48.514 22:03:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:48.514 22:03:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.514 22:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.045 22:03:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:51.045 22:03:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.045 22:03:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.045 22:03:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.045 22:03:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.045 22:03:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.045 22:03:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.045 22:03:32 -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.045 22:03:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.045 22:03:32 -- nvmf/common.sh@296 -- # e810=() 00:11:51.045 22:03:32 -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.045 22:03:32 -- nvmf/common.sh@297 -- # x722=() 00:11:51.045 22:03:32 -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.045 22:03:32 -- nvmf/common.sh@298 -- # mlx=() 00:11:51.045 22:03:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.045 22:03:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.045 22:03:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.045 22:03:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.045 22:03:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.045 22:03:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.045 22:03:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:51.045 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:51.045 22:03:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.045 22:03:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:51.045 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:51.045 22:03:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.045 22:03:32 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.045 22:03:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.045 22:03:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:51.045 22:03:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.045 22:03:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:51.045 Found net devices under 0000:84:00.0: cvl_0_0 00:11:51.045 22:03:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.045 22:03:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.045 22:03:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.045 22:03:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:51.045 22:03:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.045 22:03:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:51.045 Found net devices under 0000:84:00.1: cvl_0_1 00:11:51.045 22:03:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.045 22:03:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:51.045 22:03:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:51.045 22:03:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:51.045 22:03:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:51.045 22:03:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.045 22:03:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.045 22:03:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.045 22:03:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.045 22:03:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.045 22:03:32 -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.045 22:03:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.045 22:03:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.045 22:03:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.045 22:03:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.045 22:03:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.045 22:03:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.045 22:03:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.045 22:03:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.045 22:03:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.045 22:03:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.045 22:03:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.045 22:03:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.045 22:03:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.045 22:03:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:11:51.046 00:11:51.046 --- 10.0.0.2 ping statistics --- 00:11:51.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.046 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:51.046 22:03:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:51.046 00:11:51.046 --- 10.0.0.1 ping statistics --- 00:11:51.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.046 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:51.046 22:03:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.046 22:03:33 -- nvmf/common.sh@411 -- # return 0 00:11:51.046 22:03:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:51.046 22:03:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.046 22:03:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:51.046 22:03:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:51.046 22:03:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.046 22:03:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:51.046 22:03:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:51.046 22:03:33 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:51.046 22:03:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:51.046 22:03:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:51.046 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.046 22:03:33 -- nvmf/common.sh@470 -- # nvmfpid=3902404 00:11:51.046 22:03:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.046 22:03:33 -- nvmf/common.sh@471 -- # waitforlisten 3902404 00:11:51.046 22:03:33 -- common/autotest_common.sh@817 -- # '[' -z 3902404 ']' 00:11:51.046 22:03:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.046 22:03:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:51.046 22:03:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:51.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.046 22:03:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:51.046 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.046 [2024-04-24 22:03:33.213247] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:51.046 [2024-04-24 22:03:33.213355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.046 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.046 [2024-04-24 22:03:33.295990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.304 [2024-04-24 22:03:33.421035] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.304 [2024-04-24 22:03:33.421101] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.304 [2024-04-24 22:03:33.421118] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.304 [2024-04-24 22:03:33.421131] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.304 [2024-04-24 22:03:33.421144] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:51.304 [2024-04-24 22:03:33.421254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.304 [2024-04-24 22:03:33.421327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.304 [2024-04-24 22:03:33.421379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.304 [2024-04-24 22:03:33.421382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.304 22:03:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.304 22:03:33 -- common/autotest_common.sh@850 -- # return 0 00:11:51.304 22:03:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:51.304 22:03:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:51.304 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 22:03:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.561 22:03:33 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.561 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.561 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 [2024-04-24 22:03:33.589351] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.561 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.561 22:03:33 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.561 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.561 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 Malloc0 00:11:51.561 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.561 22:03:33 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.561 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.561 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 Malloc1 00:11:51.561 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
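The `rpc_cmd` calls in this section (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, ...) all go through `scripts/rpc.py`, which talks JSON-RPC to the target over `/var/tmp/spdk.sock`. A sketch of the request framing, assuming the standard JSON-RPC 2.0 envelope (the exact on-the-wire layout is an assumption here, not something this log shows):

```python
import json

def build_rpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Frame a JSON-RPC 2.0 request like the ones rpc.py sends to the
    SPDK target socket (envelope layout assumed from JSON-RPC 2.0)."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# Example mirroring one of the rpc_cmd invocations traced above:
req = build_rpc_request("bdev_malloc_create",
                        {"num_blocks": 131072, "block_size": 512,
                         "name": "Malloc0"})
print(req)
```

The `Invalid parameters` failure logged earlier (`"code": -32602`) is the standard JSON-RPC error the target returns when a request like `nvmf_ns_remove_host` names a namespace that has no per-host visibility configured.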
00:11:51.562 22:03:33 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:51.562 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.562 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.562 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.562 22:03:33 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.562 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.562 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.562 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.562 22:03:33 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.562 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.562 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.562 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.562 22:03:33 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.562 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.562 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.562 [2024-04-24 22:03:33.676943] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:51.562 [2024-04-24 22:03:33.677262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.562 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.562 22:03:33 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.562 22:03:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.562 22:03:33 -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.562 22:03:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.562 22:03:33 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:11:51.562 00:11:51.562 Discovery Log Number of Records 2, Generation counter 2 00:11:51.562 =====Discovery Log Entry 0====== 00:11:51.562 trtype: tcp 00:11:51.562 adrfam: ipv4 00:11:51.562 subtype: current discovery subsystem 00:11:51.562 treq: not required 00:11:51.562 portid: 0 00:11:51.562 trsvcid: 4420 00:11:51.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:51.562 traddr: 10.0.0.2 00:11:51.562 eflags: explicit discovery connections, duplicate discovery information 00:11:51.562 sectype: none 00:11:51.562 =====Discovery Log Entry 1====== 00:11:51.562 trtype: tcp 00:11:51.562 adrfam: ipv4 00:11:51.562 subtype: nvme subsystem 00:11:51.562 treq: not required 00:11:51.562 portid: 0 00:11:51.562 trsvcid: 4420 00:11:51.562 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:51.562 traddr: 10.0.0.2 00:11:51.562 eflags: none 00:11:51.562 sectype: none 00:11:51.562 22:03:33 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:51.562 22:03:33 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:51.562 22:03:33 -- nvmf/common.sh@511 -- # local dev _ 00:11:51.562 22:03:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:51.562 22:03:33 -- nvmf/common.sh@510 -- # nvme list 00:11:51.562 22:03:33 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:51.562 22:03:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:51.562 22:03:33 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:51.562 22:03:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:51.562 22:03:33 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:51.562 22:03:33 -- target/nvme_cli.sh@32 -- # nvme connect 
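The `nvme discover` output above is a simple record format: each `=====Discovery Log Entry N======` header starts a record, followed by `key: value` lines (trtype, subnqn, traddr, ...). A minimal sketch of turning that text into structured data — `parse_discovery_log` is a hypothetical helper for illustration, not part of SPDK or nvme-cli:

```python
def parse_discovery_log(text):
    """Parse `nvme discover` text output into a list of dicts,
    one per '=====Discovery Log Entry N======' record."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("====="):
            # A new record header; start collecting its fields.
            entries.append({})
        elif ":" in line and entries:
            key, _, value = line.partition(":")
            entries[-1][key.strip()] = value.strip()
    return entries
```

Applied to the log above, this yields two records: the discovery subsystem (`nqn.2014-08.org.nvmexpress.discovery`) and the test subsystem (`nqn.2016-06.io.spdk:cnode1`), both listening on 10.0.0.2:4420.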
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.127 22:03:34 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:52.127 22:03:34 -- common/autotest_common.sh@1184 -- # local i=0 00:11:52.127 22:03:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.127 22:03:34 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:52.127 22:03:34 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:52.127 22:03:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:54.673 22:03:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:54.673 22:03:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:54.673 22:03:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.673 22:03:36 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:54.673 22:03:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.673 22:03:36 -- common/autotest_common.sh@1194 -- # return 0 00:11:54.673 22:03:36 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:54.673 22:03:36 -- nvmf/common.sh@511 -- # local dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@510 -- # nvme list 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == 
/dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:54.673 /dev/nvme0n1 ]] 00:11:54.673 22:03:36 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:54.673 22:03:36 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:54.673 22:03:36 -- nvmf/common.sh@511 -- # local dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@510 -- # nvme list 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:54.673 22:03:36 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:54.673 22:03:36 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:54.673 22:03:36 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.673 22:03:36 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.673 22:03:36 -- common/autotest_common.sh@1205 -- # local i=0 00:11:54.673 22:03:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:54.673 22:03:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.673 22:03:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:54.673 22:03:36 -- 
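The `waitforserial` trace above shows the test's readiness check: loop up to 16 times, sleep between tries, count block devices matching the serial (`lsblk | grep -c SPDKISFASTANDAWESOME`), and return once the count equals the expected device count (2 here, one per namespace). The same bounded-polling pattern, sketched generically in Python (function and parameter names are illustrative, not from the test suite):

```python
import time

def wait_for_count(check, expected, retries=16, delay=0.01):
    """Poll `check()` until it returns `expected`, mirroring the
    waitforserial loop in the log: bounded retries with a sleep
    between attempts. Returns True on success, False on timeout."""
    for _ in range(retries):
        if check() == expected:
            return True
        time.sleep(delay)
    return False
```

In the log, `nvme_devices=2` matched `nvme_device_counter=2` on the first post-sleep check, so the helper returned 0 and the test proceeded to enumerate `/dev/nvme0n1` and `/dev/nvme0n2`.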
common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.673 22:03:36 -- common/autotest_common.sh@1217 -- # return 0 00:11:54.673 22:03:36 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:54.673 22:03:36 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.673 22:03:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.673 22:03:36 -- common/autotest_common.sh@10 -- # set +x 00:11:54.673 22:03:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.673 22:03:36 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:54.673 22:03:36 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:54.673 22:03:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:54.673 22:03:36 -- nvmf/common.sh@117 -- # sync 00:11:54.673 22:03:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.673 22:03:36 -- nvmf/common.sh@120 -- # set +e 00:11:54.673 22:03:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.673 22:03:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.673 rmmod nvme_tcp 00:11:54.673 rmmod nvme_fabrics 00:11:54.673 rmmod nvme_keyring 00:11:54.673 22:03:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.673 22:03:36 -- nvmf/common.sh@124 -- # set -e 00:11:54.673 22:03:36 -- nvmf/common.sh@125 -- # return 0 00:11:54.673 22:03:36 -- nvmf/common.sh@478 -- # '[' -n 3902404 ']' 00:11:54.673 22:03:36 -- nvmf/common.sh@479 -- # killprocess 3902404 00:11:54.673 22:03:36 -- common/autotest_common.sh@936 -- # '[' -z 3902404 ']' 00:11:54.673 22:03:36 -- common/autotest_common.sh@940 -- # kill -0 3902404 00:11:54.673 22:03:36 -- common/autotest_common.sh@941 -- # uname 00:11:54.673 22:03:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.673 22:03:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3902404 00:11:54.673 22:03:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.673 
22:03:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.673 22:03:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3902404' 00:11:54.673 killing process with pid 3902404 00:11:54.673 22:03:36 -- common/autotest_common.sh@955 -- # kill 3902404 00:11:54.673 [2024-04-24 22:03:36.568955] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:54.673 22:03:36 -- common/autotest_common.sh@960 -- # wait 3902404 00:11:54.673 22:03:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:54.673 22:03:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:54.673 22:03:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.673 22:03:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.673 22:03:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.673 22:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.673 22:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.202 22:03:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.202 00:11:57.202 real 0m8.415s 00:11:57.202 user 0m14.371s 00:11:57.202 sys 0m2.436s 00:11:57.202 22:03:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:57.202 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:57.202 ************************************ 00:11:57.202 END TEST nvmf_nvme_cli 00:11:57.202 ************************************ 00:11:57.202 22:03:38 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:57.202 22:03:38 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:57.202 22:03:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:57.202 22:03:38 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.202 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:57.202 ************************************ 00:11:57.202 START TEST nvmf_vfio_user 00:11:57.202 ************************************ 00:11:57.202 22:03:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:57.202 * Looking for test storage... 00:11:57.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.202 22:03:39 -- nvmf/common.sh@7 -- # uname -s 00:11:57.202 22:03:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.202 22:03:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.202 22:03:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.202 22:03:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.202 22:03:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.202 22:03:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.202 22:03:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.202 22:03:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.202 22:03:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.202 22:03:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.202 22:03:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.202 22:03:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:57.202 22:03:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.202 22:03:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.202 22:03:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.202 22:03:39 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.202 22:03:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.202 22:03:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.202 22:03:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.202 22:03:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.202 22:03:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 22:03:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 22:03:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 22:03:39 -- paths/export.sh@5 -- # export PATH 00:11:57.202 22:03:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.202 22:03:39 -- nvmf/common.sh@47 -- # : 0 00:11:57.202 22:03:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.202 22:03:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.202 22:03:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.202 22:03:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.202 22:03:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.202 22:03:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.202 22:03:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.202 22:03:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:57.202 22:03:39 -- 
target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:57.202 22:03:39 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3903214 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3903214' 00:11:57.203 Process pid: 3903214 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:57.203 22:03:39 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3903214 00:11:57.203 22:03:39 -- common/autotest_common.sh@817 -- # '[' -z 3903214 ']' 00:11:57.203 22:03:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.203 22:03:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.203 22:03:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.203 22:03:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.203 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.203 [2024-04-24 22:03:39.248013] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:57.203 [2024-04-24 22:03:39.248107] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.203 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.203 [2024-04-24 22:03:39.320954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.203 [2024-04-24 22:03:39.443785] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.203 [2024-04-24 22:03:39.443847] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.203 [2024-04-24 22:03:39.443863] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.203 [2024-04-24 22:03:39.443876] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.203 [2024-04-24 22:03:39.443888] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.203 [2024-04-24 22:03:39.443974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.203 [2024-04-24 22:03:39.444026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.203 [2024-04-24 22:03:39.444051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.203 [2024-04-24 22:03:39.444055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.461 22:03:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:57.461 22:03:39 -- common/autotest_common.sh@850 -- # return 0 00:11:57.461 22:03:39 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:58.394 22:03:40 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:58.653 22:03:40 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:58.653 22:03:40 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:58.653 22:03:40 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.653 22:03:40 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:58.653 22:03:40 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:58.911 Malloc1 00:11:58.911 22:03:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:59.478 22:03:41 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:59.736 22:03:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:00.023 [2024-04-24 22:03:42.017887] nvmf_rpc.c: 
621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:00.023 22:03:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:00.023 22:03:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:00.023 22:03:42 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.292 Malloc2 00:12:00.292 22:03:42 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:00.550 22:03:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:01.115 22:03:43 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:01.375 22:03:43 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:01.375 [2024-04-24 22:03:43.427146] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:01.375 [2024-04-24 22:03:43.427193] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903758 ] 00:12:01.375 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.375 [2024-04-24 22:03:43.464081] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:01.375 [2024-04-24 22:03:43.472938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.375 [2024-04-24 22:03:43.472971] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff681319000 00:12:01.375 [2024-04-24 22:03:43.473916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.474919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.475949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.476930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.477946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.478941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.479946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:12:01.375 [2024-04-24 22:03:43.480952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:01.376 [2024-04-24 22:03:43.481959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:01.376 [2024-04-24 22:03:43.481986] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff68130e000 00:12:01.376 [2024-04-24 22:03:43.483267] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.376 [2024-04-24 22:03:43.503094] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:01.376 [2024-04-24 22:03:43.503143] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:01.376 [2024-04-24 22:03:43.506108] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:01.376 [2024-04-24 22:03:43.506169] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:01.376 [2024-04-24 22:03:43.506275] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:01.376 [2024-04-24 22:03:43.506313] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:01.376 [2024-04-24 22:03:43.506325] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:01.376 [2024-04-24 22:03:43.507100] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:01.376 [2024-04-24 22:03:43.507122] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:01.376 [2024-04-24 22:03:43.507136] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:01.376 [2024-04-24 22:03:43.508104] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:01.376 [2024-04-24 22:03:43.508127] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:01.376 [2024-04-24 22:03:43.508142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.509106] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:01.376 [2024-04-24 22:03:43.509127] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.510110] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:01.376 [2024-04-24 22:03:43.510131] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:01.376 [2024-04-24 22:03:43.510141] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.510154] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.510266] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:01.376 [2024-04-24 22:03:43.510281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.510292] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:01.376 [2024-04-24 22:03:43.511124] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:01.376 [2024-04-24 22:03:43.512126] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:01.376 [2024-04-24 22:03:43.513134] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.376 [2024-04-24 22:03:43.514131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.376 [2024-04-24 22:03:43.514239] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:01.376 [2024-04-24 22:03:43.515150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:01.376 [2024-04-24 22:03:43.515170] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:01.376 [2024-04-24 22:03:43.515181] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515208] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:01.376 [2024-04-24 22:03:43.515229] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515263] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.376 [2024-04-24 22:03:43.515274] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.376 [2024-04-24 22:03:43.515301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.515384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:01.376 [2024-04-24 22:03:43.515413] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:01.376 [2024-04-24 22:03:43.515424] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:01.376 [2024-04-24 22:03:43.515433] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:01.376 [2024-04-24 22:03:43.515452] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:01.376 [2024-04-24 22:03:43.515461] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:01.376 [2024-04-24 22:03:43.515470] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:01.376 [2024-04-24 22:03:43.515479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515494] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.515530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:01.376 [2024-04-24 22:03:43.515562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.376 [2024-04-24 22:03:43.515588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.376 [2024-04-24 22:03:43.515601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.376 [2024-04-24 22:03:43.515614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.376 [2024-04-24 22:03:43.515623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515641] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515657] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.515670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:01.376 [2024-04-24 22:03:43.515683] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:01.376 [2024-04-24 22:03:43.515693] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515710] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515722] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.515756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:01.376 [2024-04-24 22:03:43.515815] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515832] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515848] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:01.376 [2024-04-24 22:03:43.515858] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:01.376 [2024-04-24 22:03:43.515869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.515889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:01.376 [2024-04-24 22:03:43.515910] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:01.376 [2024-04-24 22:03:43.515929] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515945] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:01.376 [2024-04-24 22:03:43.515958] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.376 [2024-04-24 22:03:43.515967] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.376 [2024-04-24 22:03:43.515982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.376 [2024-04-24 22:03:43.516006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516033] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516063] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:01.377 [2024-04-24 22:03:43.516072] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.377 [2024-04-24 22:03:43.516084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516114] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516164] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:01.377 [2024-04-24 22:03:43.516174] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:01.377 [2024-04-24 22:03:43.516183] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:01.377 [2024-04-24 
22:03:43.516192] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:01.377 [2024-04-24 22:03:43.516223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516367] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:01.377 [2024-04-24 22:03:43.516380] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:01.377 [2024-04-24 22:03:43.516388] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:01.377 [2024-04-24 22:03:43.516406] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:12:01.377 [2024-04-24 22:03:43.516419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:01.377 [2024-04-24 22:03:43.516433] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:01.377 [2024-04-24 22:03:43.516442] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:01.377 [2024-04-24 22:03:43.516452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516465] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:01.377 [2024-04-24 22:03:43.516473] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:01.377 [2024-04-24 22:03:43.516483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516497] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:01.377 [2024-04-24 22:03:43.516506] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:01.377 [2024-04-24 22:03:43.516521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:01.377 [2024-04-24 22:03:43.516534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 
22:03:43.516576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:01.377 [2024-04-24 22:03:43.516589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:01.377 ===================================================== 00:12:01.377 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:01.377 ===================================================== 00:12:01.377 Controller Capabilities/Features 00:12:01.377 ================================ 00:12:01.377 Vendor ID: 4e58 00:12:01.377 Subsystem Vendor ID: 4e58 00:12:01.377 Serial Number: SPDK1 00:12:01.377 Model Number: SPDK bdev Controller 00:12:01.377 Firmware Version: 24.05 00:12:01.377 Recommended Arb Burst: 6 00:12:01.377 IEEE OUI Identifier: 8d 6b 50 00:12:01.377 Multi-path I/O 00:12:01.377 May have multiple subsystem ports: Yes 00:12:01.377 May have multiple controllers: Yes 00:12:01.377 Associated with SR-IOV VF: No 00:12:01.377 Max Data Transfer Size: 131072 00:12:01.377 Max Number of Namespaces: 32 00:12:01.377 Max Number of I/O Queues: 127 00:12:01.377 NVMe Specification Version (VS): 1.3 00:12:01.377 NVMe Specification Version (Identify): 1.3 00:12:01.377 Maximum Queue Entries: 256 00:12:01.377 Contiguous Queues Required: Yes 00:12:01.377 Arbitration Mechanisms Supported 00:12:01.377 Weighted Round Robin: Not Supported 00:12:01.377 Vendor Specific: Not Supported 00:12:01.377 Reset Timeout: 15000 ms 00:12:01.377 Doorbell Stride: 4 bytes 00:12:01.377 NVM Subsystem Reset: Not Supported 00:12:01.377 Command Sets Supported 00:12:01.377 NVM Command Set: Supported 00:12:01.377 Boot Partition: Not Supported 00:12:01.377 Memory Page Size Minimum: 4096 bytes 00:12:01.377 Memory Page Size Maximum: 4096 bytes 00:12:01.377 Persistent Memory Region: Not Supported 00:12:01.377 Optional Asynchronous Events Supported 00:12:01.377 Namespace 
Attribute Notices: Supported 00:12:01.377 Firmware Activation Notices: Not Supported 00:12:01.377 ANA Change Notices: Not Supported 00:12:01.377 PLE Aggregate Log Change Notices: Not Supported 00:12:01.377 LBA Status Info Alert Notices: Not Supported 00:12:01.377 EGE Aggregate Log Change Notices: Not Supported 00:12:01.377 Normal NVM Subsystem Shutdown event: Not Supported 00:12:01.377 Zone Descriptor Change Notices: Not Supported 00:12:01.377 Discovery Log Change Notices: Not Supported 00:12:01.377 Controller Attributes 00:12:01.377 128-bit Host Identifier: Supported 00:12:01.377 Non-Operational Permissive Mode: Not Supported 00:12:01.377 NVM Sets: Not Supported 00:12:01.377 Read Recovery Levels: Not Supported 00:12:01.377 Endurance Groups: Not Supported 00:12:01.377 Predictable Latency Mode: Not Supported 00:12:01.377 Traffic Based Keep ALive: Not Supported 00:12:01.377 Namespace Granularity: Not Supported 00:12:01.377 SQ Associations: Not Supported 00:12:01.377 UUID List: Not Supported 00:12:01.377 Multi-Domain Subsystem: Not Supported 00:12:01.377 Fixed Capacity Management: Not Supported 00:12:01.377 Variable Capacity Management: Not Supported 00:12:01.377 Delete Endurance Group: Not Supported 00:12:01.377 Delete NVM Set: Not Supported 00:12:01.377 Extended LBA Formats Supported: Not Supported 00:12:01.377 Flexible Data Placement Supported: Not Supported 00:12:01.377 00:12:01.377 Controller Memory Buffer Support 00:12:01.377 ================================ 00:12:01.377 Supported: No 00:12:01.377 00:12:01.377 Persistent Memory Region Support 00:12:01.377 ================================ 00:12:01.377 Supported: No 00:12:01.377 00:12:01.377 Admin Command Set Attributes 00:12:01.377 ============================ 00:12:01.377 Security Send/Receive: Not Supported 00:12:01.377 Format NVM: Not Supported 00:12:01.377 Firmware Activate/Download: Not Supported 00:12:01.377 Namespace Management: Not Supported 00:12:01.377 Device Self-Test: Not Supported 00:12:01.377 
Directives: Not Supported 00:12:01.377 NVMe-MI: Not Supported 00:12:01.377 Virtualization Management: Not Supported 00:12:01.377 Doorbell Buffer Config: Not Supported 00:12:01.377 Get LBA Status Capability: Not Supported 00:12:01.378 Command & Feature Lockdown Capability: Not Supported 00:12:01.378 Abort Command Limit: 4 00:12:01.378 Async Event Request Limit: 4 00:12:01.378 Number of Firmware Slots: N/A 00:12:01.378 Firmware Slot 1 Read-Only: N/A 00:12:01.378 Firmware Activation Without Reset: N/A 00:12:01.378 Multiple Update Detection Support: N/A 00:12:01.378 Firmware Update Granularity: No Information Provided 00:12:01.378 Per-Namespace SMART Log: No 00:12:01.378 Asymmetric Namespace Access Log Page: Not Supported 00:12:01.378 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:01.378 Command Effects Log Page: Supported 00:12:01.378 Get Log Page Extended Data: Supported 00:12:01.378 Telemetry Log Pages: Not Supported 00:12:01.378 Persistent Event Log Pages: Not Supported 00:12:01.378 Supported Log Pages Log Page: May Support 00:12:01.378 Commands Supported & Effects Log Page: Not Supported 00:12:01.378 Feature Identifiers & Effects Log Page:May Support 00:12:01.378 NVMe-MI Commands & Effects Log Page: May Support 00:12:01.378 Data Area 4 for Telemetry Log: Not Supported 00:12:01.378 Error Log Page Entries Supported: 128 00:12:01.378 Keep Alive: Supported 00:12:01.378 Keep Alive Granularity: 10000 ms 00:12:01.378 00:12:01.378 NVM Command Set Attributes 00:12:01.378 ========================== 00:12:01.378 Submission Queue Entry Size 00:12:01.378 Max: 64 00:12:01.378 Min: 64 00:12:01.378 Completion Queue Entry Size 00:12:01.378 Max: 16 00:12:01.378 Min: 16 00:12:01.378 Number of Namespaces: 32 00:12:01.378 Compare Command: Supported 00:12:01.378 Write Uncorrectable Command: Not Supported 00:12:01.378 Dataset Management Command: Supported 00:12:01.378 Write Zeroes Command: Supported 00:12:01.378 Set Features Save Field: Not Supported 00:12:01.378 Reservations: Not 
Supported 00:12:01.378 Timestamp: Not Supported 00:12:01.378 Copy: Supported 00:12:01.378 Volatile Write Cache: Present 00:12:01.378 Atomic Write Unit (Normal): 1 00:12:01.378 Atomic Write Unit (PFail): 1 00:12:01.378 Atomic Compare & Write Unit: 1 00:12:01.378 Fused Compare & Write: Supported 00:12:01.378 Scatter-Gather List 00:12:01.378 SGL Command Set: Supported (Dword aligned) 00:12:01.378 SGL Keyed: Not Supported 00:12:01.378 SGL Bit Bucket Descriptor: Not Supported 00:12:01.378 SGL Metadata Pointer: Not Supported 00:12:01.378 Oversized SGL: Not Supported 00:12:01.378 SGL Metadata Address: Not Supported 00:12:01.378 SGL Offset: Not Supported 00:12:01.378 Transport SGL Data Block: Not Supported 00:12:01.378 Replay Protected Memory Block: Not Supported 00:12:01.378 00:12:01.378 Firmware Slot Information 00:12:01.378 ========================= 00:12:01.378 Active slot: 1 00:12:01.378 Slot 1 Firmware Revision: 24.05 00:12:01.378 00:12:01.378 00:12:01.378 Commands Supported and Effects 00:12:01.378 ============================== 00:12:01.378 Admin Commands 00:12:01.378 -------------- 00:12:01.378 Get Log Page (02h): Supported 00:12:01.378 Identify (06h): Supported 00:12:01.378 Abort (08h): Supported 00:12:01.378 Set Features (09h): Supported 00:12:01.378 Get Features (0Ah): Supported 00:12:01.378 Asynchronous Event Request (0Ch): Supported 00:12:01.378 Keep Alive (18h): Supported 00:12:01.378 I/O Commands 00:12:01.378 ------------ 00:12:01.378 Flush (00h): Supported LBA-Change 00:12:01.378 Write (01h): Supported LBA-Change 00:12:01.378 Read (02h): Supported 00:12:01.378 Compare (05h): Supported 00:12:01.378 Write Zeroes (08h): Supported LBA-Change 00:12:01.378 Dataset Management (09h): Supported LBA-Change 00:12:01.378 Copy (19h): Supported LBA-Change 00:12:01.378 Unknown (79h): Supported LBA-Change 00:12:01.378 Unknown (7Ah): Supported 00:12:01.378 00:12:01.378 Error Log 00:12:01.378 ========= 00:12:01.378 00:12:01.378 Arbitration 00:12:01.378 =========== 
00:12:01.378 Arbitration Burst: 1 00:12:01.378 00:12:01.378 Power Management 00:12:01.378 ================ 00:12:01.378 Number of Power States: 1 00:12:01.378 Current Power State: Power State #0 00:12:01.378 Power State #0: 00:12:01.378 Max Power: 0.00 W 00:12:01.378 Non-Operational State: Operational 00:12:01.378 Entry Latency: Not Reported 00:12:01.378 Exit Latency: Not Reported 00:12:01.378 Relative Read Throughput: 0 00:12:01.378 Relative Read Latency: 0 00:12:01.378 Relative Write Throughput: 0 00:12:01.378 Relative Write Latency: 0 00:12:01.378 Idle Power: Not Reported 00:12:01.378 Active Power: Not Reported 00:12:01.378 Non-Operational Permissive Mode: Not Supported 00:12:01.378 00:12:01.378 Health Information 00:12:01.378 ================== 00:12:01.378 Critical Warnings: 00:12:01.378 Available Spare Space: OK 00:12:01.378 Temperature: OK 00:12:01.378 Device Reliability: OK 00:12:01.378 Read Only: No 00:12:01.378 Volatile Memory Backup: OK 00:12:01.378 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-24 22:03:43.516741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:01.378 [2024-04-24 22:03:43.516760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:01.378 [2024-04-24 22:03:43.516806] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:01.378 [2024-04-24 22:03:43.516826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.378 [2024-04-24 22:03:43.516838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.378 [2024-04-24 22:03:43.516850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:12:01.378 [2024-04-24 22:03:43.516860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.378 [2024-04-24 22:03:43.519406] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:01.378 [2024-04-24 22:03:43.519432] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:01.378 [2024-04-24 22:03:43.520180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.378 [2024-04-24 22:03:43.520261] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:01.378 [2024-04-24 22:03:43.520282] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:01.378 [2024-04-24 22:03:43.521194] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:01.378 [2024-04-24 22:03:43.521222] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:01.378 [2024-04-24 22:03:43.521291] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:01.378 [2024-04-24 22:03:43.524409] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:01.378 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:01.378 Available Spare: 0% 00:12:01.378 Available Spare Threshold: 0% 00:12:01.378 Life Percentage Used: 0% 00:12:01.378 Data Units Read: 0 00:12:01.378 Data Units Written: 0 00:12:01.378 Host Read Commands: 0 00:12:01.378 Host Write
Commands: 0 00:12:01.378 Controller Busy Time: 0 minutes 00:12:01.378 Power Cycles: 0 00:12:01.378 Power On Hours: 0 hours 00:12:01.378 Unsafe Shutdowns: 0 00:12:01.378 Unrecoverable Media Errors: 0 00:12:01.378 Lifetime Error Log Entries: 0 00:12:01.378 Warning Temperature Time: 0 minutes 00:12:01.378 Critical Temperature Time: 0 minutes 00:12:01.378 00:12:01.378 Number of Queues 00:12:01.378 ================ 00:12:01.378 Number of I/O Submission Queues: 127 00:12:01.378 Number of I/O Completion Queues: 127 00:12:01.378 00:12:01.378 Active Namespaces 00:12:01.378 ================= 00:12:01.378 Namespace ID:1 00:12:01.378 Error Recovery Timeout: Unlimited 00:12:01.378 Command Set Identifier: NVM (00h) 00:12:01.378 Deallocate: Supported 00:12:01.378 Deallocated/Unwritten Error: Not Supported 00:12:01.378 Deallocated Read Value: Unknown 00:12:01.378 Deallocate in Write Zeroes: Not Supported 00:12:01.378 Deallocated Guard Field: 0xFFFF 00:12:01.378 Flush: Supported 00:12:01.378 Reservation: Supported 00:12:01.378 Namespace Sharing Capabilities: Multiple Controllers 00:12:01.378 Size (in LBAs): 131072 (0GiB) 00:12:01.378 Capacity (in LBAs): 131072 (0GiB) 00:12:01.378 Utilization (in LBAs): 131072 (0GiB) 00:12:01.378 NGUID: B5F9C716840845BA86216137599628A1 00:12:01.378 UUID: b5f9c716-8408-45ba-8621-6137599628a1 00:12:01.378 Thin Provisioning: Not Supported 00:12:01.378 Per-NS Atomic Units: Yes 00:12:01.379 Atomic Boundary Size (Normal): 0 00:12:01.379 Atomic Boundary Size (PFail): 0 00:12:01.379 Atomic Boundary Offset: 0 00:12:01.379 Maximum Single Source Range Length: 65535 00:12:01.379 Maximum Copy Length: 65535 00:12:01.379 Maximum Source Range Count: 1 00:12:01.379 NGUID/EUI64 Never Reused: No 00:12:01.379 Namespace Write Protected: No 00:12:01.379 Number of LBA Formats: 1 00:12:01.379 Current LBA Format: LBA Format #00 00:12:01.379 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:01.379 00:12:01.379 22:03:43 -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:01.379 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.636 [2024-04-24 22:03:43.776673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:06.901 [2024-04-24 22:03:48.797408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:06.901 Initializing NVMe Controllers 00:12:06.901 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:06.902 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:06.902 Initialization complete. Launching workers. 00:12:06.902 ======================================================== 00:12:06.902 Latency(us) 00:12:06.902 Device Information : IOPS MiB/s Average min max 00:12:06.902 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 25329.50 98.94 5052.49 1388.90 8951.28 00:12:06.902 ======================================================== 00:12:06.902 Total : 25329.50 98.94 5052.49 1388.90 8951.28 00:12:06.902 00:12:06.902 22:03:48 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:06.902 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.902 [2024-04-24 22:03:49.054641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.168 [2024-04-24 22:03:54.097336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.168 Initializing NVMe Controllers 00:12:12.168 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.168 Initialization complete. Launching workers. 00:12:12.168 ======================================================== 00:12:12.168 Latency(us) 00:12:12.168 Device Information : IOPS MiB/s Average min max 00:12:12.168 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.77 62.70 7979.72 7705.31 8157.87 00:12:12.168 ======================================================== 00:12:12.168 Total : 16050.77 62.70 7979.72 7705.31 8157.87 00:12:12.168 00:12:12.168 22:03:54 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:12.168 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.168 [2024-04-24 22:03:54.327518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:17.437 [2024-04-24 22:03:59.414875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:17.437 Initializing NVMe Controllers 00:12:17.437 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:17.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:17.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:17.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:17.437 Initialization complete. Launching workers. 
00:12:17.437 Starting thread on core 2 00:12:17.437 Starting thread on core 3 00:12:17.437 Starting thread on core 1 00:12:17.437 22:03:59 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:17.437 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.696 [2024-04-24 22:03:59.782965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:20.984 [2024-04-24 22:04:02.929700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:20.984 Initializing NVMe Controllers 00:12:20.984 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.984 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.984 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:20.984 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:20.984 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:20.984 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:20.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:20.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:20.984 Initialization complete. Launching workers. 
00:12:20.984 Starting thread on core 1 with urgent priority queue 00:12:20.984 Starting thread on core 2 with urgent priority queue 00:12:20.984 Starting thread on core 3 with urgent priority queue 00:12:20.984 Starting thread on core 0 with urgent priority queue 00:12:20.984 SPDK bdev Controller (SPDK1 ) core 0: 1850.33 IO/s 54.04 secs/100000 ios 00:12:20.984 SPDK bdev Controller (SPDK1 ) core 1: 1540.67 IO/s 64.91 secs/100000 ios 00:12:20.984 SPDK bdev Controller (SPDK1 ) core 2: 1969.00 IO/s 50.79 secs/100000 ios 00:12:20.984 SPDK bdev Controller (SPDK1 ) core 3: 1937.00 IO/s 51.63 secs/100000 ios 00:12:20.984 ======================================================== 00:12:20.984 00:12:20.984 22:04:02 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:20.984 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.243 [2024-04-24 22:04:03.255514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.243 [2024-04-24 22:04:03.289244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.243 Initializing NVMe Controllers 00:12:21.243 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.243 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:21.243 Namespace ID: 1 size: 0GB 00:12:21.243 Initialization complete. 00:12:21.243 INFO: using host memory buffer for IO 00:12:21.243 Hello world! 
00:12:21.243 22:04:03 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:21.243 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.501 [2024-04-24 22:04:03.611921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:22.438 Initializing NVMe Controllers 00:12:22.438 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.438 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:22.438 Initialization complete. Launching workers. 00:12:22.438 submit (in ns) avg, min, max = 9928.2, 4177.8, 4029509.6 00:12:22.438 complete (in ns) avg, min, max = 26272.8, 2468.1, 4005481.5 00:12:22.438 00:12:22.438 Submit histogram 00:12:22.438 ================ 00:12:22.438 Range in us Cumulative Count 00:12:22.438 4.172 - 4.196: 0.0084% ( 1) 00:12:22.438 4.196 - 4.219: 0.0919% ( 10) 00:12:22.438 4.219 - 4.243: 0.4596% ( 44) 00:12:22.438 4.243 - 4.267: 1.6714% ( 145) 00:12:22.438 4.267 - 4.290: 4.5796% ( 348) 00:12:22.438 4.290 - 4.314: 10.2791% ( 682) 00:12:22.438 4.314 - 4.338: 17.1319% ( 820) 00:12:22.438 4.338 - 4.361: 25.8733% ( 1046) 00:12:22.438 4.361 - 4.385: 34.3473% ( 1014) 00:12:22.438 4.385 - 4.409: 41.2669% ( 828) 00:12:22.438 4.409 - 4.433: 47.0249% ( 689) 00:12:22.438 4.433 - 4.456: 49.6406% ( 313) 00:12:22.438 4.456 - 4.480: 51.0196% ( 165) 00:12:22.438 4.480 - 4.504: 52.6910% ( 200) 00:12:22.438 4.504 - 4.527: 55.6243% ( 351) 00:12:22.438 4.527 - 4.551: 59.7610% ( 495) 00:12:22.438 4.551 - 4.575: 65.1930% ( 650) 00:12:22.438 4.575 - 4.599: 70.0568% ( 582) 00:12:22.438 4.599 - 4.622: 73.4163% ( 402) 00:12:22.438 4.622 - 4.646: 75.3385% ( 230) 00:12:22.438 4.646 - 4.670: 76.4332% ( 131) 00:12:22.438 4.670 - 4.693: 77.1185% ( 82) 00:12:22.438 4.693 - 4.717: 77.5029% ( 46) 00:12:22.438 4.717 - 4.741: 77.7286% ( 
27) 00:12:22.438 4.741 - 4.764: 78.1798% ( 54) 00:12:22.438 4.764 - 4.788: 78.8985% ( 86) 00:12:22.438 4.788 - 4.812: 79.1075% ( 25) 00:12:22.438 4.812 - 4.836: 79.2078% ( 12) 00:12:22.438 4.836 - 4.859: 79.2997% ( 11) 00:12:22.438 4.859 - 4.883: 79.3331% ( 4) 00:12:22.438 4.883 - 4.907: 79.3916% ( 7) 00:12:22.438 4.907 - 4.930: 80.0518% ( 79) 00:12:22.438 4.930 - 4.954: 84.2220% ( 499) 00:12:22.438 4.954 - 4.978: 93.9746% ( 1167) 00:12:22.438 4.978 - 5.001: 96.1474% ( 260) 00:12:22.438 5.001 - 5.025: 96.5486% ( 48) 00:12:22.438 5.025 - 5.049: 96.6906% ( 17) 00:12:22.438 5.049 - 5.073: 96.7408% ( 6) 00:12:22.438 5.073 - 5.096: 96.8160% ( 9) 00:12:22.438 5.096 - 5.120: 96.8912% ( 9) 00:12:22.438 5.120 - 5.144: 96.9915% ( 12) 00:12:22.438 5.144 - 5.167: 97.0667% ( 9) 00:12:22.438 5.167 - 5.191: 97.1419% ( 9) 00:12:22.438 5.191 - 5.215: 97.2422% ( 12) 00:12:22.438 5.215 - 5.239: 97.4260% ( 22) 00:12:22.438 5.239 - 5.262: 97.6183% ( 23) 00:12:22.438 5.262 - 5.286: 97.6684% ( 6) 00:12:22.438 5.286 - 5.310: 97.6935% ( 3) 00:12:22.438 5.310 - 5.333: 97.7269% ( 4) 00:12:22.438 5.333 - 5.357: 97.7436% ( 2) 00:12:22.438 5.357 - 5.381: 97.7687% ( 3) 00:12:22.438 5.381 - 5.404: 97.7854% ( 2) 00:12:22.438 5.404 - 5.428: 97.7937% ( 1) 00:12:22.438 5.428 - 5.452: 97.8105% ( 2) 00:12:22.438 5.452 - 5.476: 97.8272% ( 2) 00:12:22.438 5.476 - 5.499: 97.8773% ( 6) 00:12:22.438 5.499 - 5.523: 97.9191% ( 5) 00:12:22.438 5.523 - 5.547: 97.9692% ( 6) 00:12:22.438 5.547 - 5.570: 98.0027% ( 4) 00:12:22.438 5.570 - 5.594: 98.0110% ( 1) 00:12:22.438 5.594 - 5.618: 98.0361% ( 3) 00:12:22.438 5.618 - 5.641: 98.0528% ( 2) 00:12:22.438 5.641 - 5.665: 98.0695% ( 2) 00:12:22.438 5.665 - 5.689: 98.0946% ( 3) 00:12:22.438 5.689 - 5.713: 98.1030% ( 1) 00:12:22.438 5.713 - 5.736: 98.1113% ( 1) 00:12:22.438 5.736 - 5.760: 98.1531% ( 5) 00:12:22.438 5.760 - 5.784: 98.1698% ( 2) 00:12:22.438 5.784 - 5.807: 98.1865% ( 2) 00:12:22.438 5.807 - 5.831: 98.1949% ( 1) 00:12:22.438 5.879 - 5.902: 98.2032% ( 1) 
00:12:22.438 5.926 - 5.950: 98.2200% ( 2) 00:12:22.438 5.950 - 5.973: 98.2367% ( 2) 00:12:22.438 5.973 - 5.997: 98.2617% ( 3) 00:12:22.438 5.997 - 6.021: 98.2701% ( 1) 00:12:22.438 6.021 - 6.044: 98.2785% ( 1) 00:12:22.438 6.068 - 6.116: 98.2952% ( 2) 00:12:22.438 6.210 - 6.258: 98.3035% ( 1) 00:12:22.438 6.258 - 6.305: 98.3202% ( 2) 00:12:22.438 6.305 - 6.353: 98.3370% ( 2) 00:12:22.438 6.400 - 6.447: 98.3453% ( 1) 00:12:22.438 6.495 - 6.542: 98.3537% ( 1) 00:12:22.438 6.590 - 6.637: 98.3704% ( 2) 00:12:22.438 6.684 - 6.732: 98.3787% ( 1) 00:12:22.438 6.732 - 6.779: 98.4038% ( 3) 00:12:22.438 6.827 - 6.874: 98.4122% ( 1) 00:12:22.438 6.874 - 6.921: 98.4289% ( 2) 00:12:22.438 7.064 - 7.111: 98.4372% ( 1) 00:12:22.438 7.206 - 7.253: 98.4456% ( 1) 00:12:22.438 7.396 - 7.443: 98.4540% ( 1) 00:12:22.438 7.538 - 7.585: 98.4623% ( 1) 00:12:22.438 7.585 - 7.633: 98.4707% ( 1) 00:12:22.438 7.727 - 7.775: 98.4790% ( 1) 00:12:22.438 7.822 - 7.870: 98.4874% ( 1) 00:12:22.438 7.870 - 7.917: 98.4957% ( 1) 00:12:22.438 7.917 - 7.964: 98.5125% ( 2) 00:12:22.438 8.059 - 8.107: 98.5208% ( 1) 00:12:22.438 8.107 - 8.154: 98.5375% ( 2) 00:12:22.438 8.154 - 8.201: 98.5542% ( 2) 00:12:22.438 8.296 - 8.344: 98.5626% ( 1) 00:12:22.438 8.344 - 8.391: 98.5710% ( 1) 00:12:22.438 8.391 - 8.439: 98.5877% ( 2) 00:12:22.438 8.486 - 8.533: 98.5960% ( 1) 00:12:22.438 8.533 - 8.581: 98.6044% ( 1) 00:12:22.438 8.581 - 8.628: 98.6127% ( 1) 00:12:22.438 8.676 - 8.723: 98.6211% ( 1) 00:12:22.438 8.723 - 8.770: 98.6295% ( 1) 00:12:22.438 8.770 - 8.818: 98.6378% ( 1) 00:12:22.438 8.818 - 8.865: 98.6462% ( 1) 00:12:22.438 8.865 - 8.913: 98.6545% ( 1) 00:12:22.438 8.913 - 8.960: 98.6963% ( 5) 00:12:22.438 8.960 - 9.007: 98.7130% ( 2) 00:12:22.438 9.007 - 9.055: 98.7297% ( 2) 00:12:22.438 9.055 - 9.102: 98.7381% ( 1) 00:12:22.438 9.102 - 9.150: 98.7464% ( 1) 00:12:22.438 9.197 - 9.244: 98.7632% ( 2) 00:12:22.438 9.244 - 9.292: 98.7799% ( 2) 00:12:22.438 9.292 - 9.339: 98.8049% ( 3) 00:12:22.438 9.339 - 
9.387: 98.8300% ( 3) 00:12:22.438 9.387 - 9.434: 98.8384% ( 1) 00:12:22.438 9.481 - 9.529: 98.8551% ( 2) 00:12:22.438 9.529 - 9.576: 98.8718% ( 2) 00:12:22.438 9.576 - 9.624: 98.8885% ( 2) 00:12:22.438 9.671 - 9.719: 98.8969% ( 1) 00:12:22.438 9.719 - 9.766: 98.9136% ( 2) 00:12:22.438 9.766 - 9.813: 98.9219% ( 1) 00:12:22.438 9.861 - 9.908: 98.9470% ( 3) 00:12:22.438 9.908 - 9.956: 98.9554% ( 1) 00:12:22.438 10.050 - 10.098: 98.9637% ( 1) 00:12:22.438 10.098 - 10.145: 98.9721% ( 1) 00:12:22.438 10.193 - 10.240: 98.9972% ( 3) 00:12:22.438 10.287 - 10.335: 99.0139% ( 2) 00:12:22.438 10.335 - 10.382: 99.0389% ( 3) 00:12:22.438 10.382 - 10.430: 99.0473% ( 1) 00:12:22.438 10.430 - 10.477: 99.0557% ( 1) 00:12:22.438 10.477 - 10.524: 99.0724% ( 2) 00:12:22.438 10.572 - 10.619: 99.0891% ( 2) 00:12:22.438 10.619 - 10.667: 99.1058% ( 2) 00:12:22.438 10.667 - 10.714: 99.1142% ( 1) 00:12:22.438 10.714 - 10.761: 99.1225% ( 1) 00:12:22.438 10.761 - 10.809: 99.1476% ( 3) 00:12:22.438 10.809 - 10.856: 99.1559% ( 1) 00:12:22.438 10.856 - 10.904: 99.1727% ( 2) 00:12:22.438 10.904 - 10.951: 99.1810% ( 1) 00:12:22.438 11.093 - 11.141: 99.1977% ( 2) 00:12:22.438 11.141 - 11.188: 99.2061% ( 1) 00:12:22.438 11.236 - 11.283: 99.2228% ( 2) 00:12:22.438 11.283 - 11.330: 99.2479% ( 3) 00:12:22.438 11.378 - 11.425: 99.2646% ( 2) 00:12:22.438 11.473 - 11.520: 99.2813% ( 2) 00:12:22.438 11.520 - 11.567: 99.2897% ( 1) 00:12:22.438 11.567 - 11.615: 99.2980% ( 1) 00:12:22.438 11.662 - 11.710: 99.3064% ( 1) 00:12:22.438 11.710 - 11.757: 99.3231% ( 2) 00:12:22.438 11.757 - 11.804: 99.3314% ( 1) 00:12:22.438 11.804 - 11.852: 99.3482% ( 2) 00:12:22.438 11.852 - 11.899: 99.3649% ( 2) 00:12:22.438 11.994 - 12.041: 99.3732% ( 1) 00:12:22.438 12.041 - 12.089: 99.3816% ( 1) 00:12:22.439 12.089 - 12.136: 99.3899% ( 1) 00:12:22.439 12.231 - 12.326: 99.4067% ( 2) 00:12:22.439 12.326 - 12.421: 99.4150% ( 1) 00:12:22.439 12.421 - 12.516: 99.4234% ( 1) 00:12:22.439 12.516 - 12.610: 99.4401% ( 2) 00:12:22.439 
12.610 - 12.705: 99.4484% ( 1) 00:12:22.439 12.705 - 12.800: 99.4568% ( 1) 00:12:22.439 12.800 - 12.895: 99.4652% ( 1) 00:12:22.439 12.895 - 12.990: 99.4902% ( 3) 00:12:22.439 12.990 - 13.084: 99.5069% ( 2) 00:12:22.439 13.084 - 13.179: 99.5153% ( 1) 00:12:22.439 13.179 - 13.274: 99.5237% ( 1) 00:12:22.439 13.274 - 13.369: 99.5320% ( 1) 00:12:22.439 13.369 - 13.464: 99.5487% ( 2) 00:12:22.439 13.464 - 13.559: 99.5571% ( 1) 00:12:22.439 13.559 - 13.653: 99.5738% ( 2) 00:12:22.439 13.653 - 13.748: 99.5821% ( 1) 00:12:22.439 13.843 - 13.938: 99.5905% ( 1) 00:12:22.439 13.938 - 14.033: 99.6156% ( 3) 00:12:22.439 14.033 - 14.127: 99.6406% ( 3) 00:12:22.439 14.127 - 14.222: 99.6490% ( 1) 00:12:22.439 14.412 - 14.507: 99.6741% ( 3) 00:12:22.439 14.601 - 14.696: 99.6908% ( 2) 00:12:22.439 14.791 - 14.886: 99.7242% ( 4) 00:12:22.439 14.886 - 14.981: 99.7493% ( 3) 00:12:22.439 14.981 - 15.076: 99.7744% ( 3) 00:12:22.439 15.076 - 15.170: 99.7911% ( 2) 00:12:22.439 15.265 - 15.360: 99.8078% ( 2) 00:12:22.439 15.360 - 15.455: 99.8161% ( 1) 00:12:22.439 15.455 - 15.550: 99.8329% ( 2) 00:12:22.439 15.739 - 15.834: 99.8412% ( 1) 00:12:22.439 16.119 - 16.213: 99.8496% ( 1) 00:12:22.439 16.403 - 16.498: 99.8579% ( 1) 00:12:22.439 18.489 - 18.584: 99.8663% ( 1) 00:12:22.439 3179.710 - 3203.982: 99.8746% ( 1) 00:12:22.439 3980.705 - 4004.978: 99.9833% ( 13) 00:12:22.439 4004.978 - 4029.250: 99.9916% ( 1) 00:12:22.439 4029.250 - 4053.523: 100.0000% ( 1) 00:12:22.439 00:12:22.439 Complete histogram 00:12:22.439 ================== 00:12:22.439 Range in us Cumulative Count 00:12:22.439 2.465 - 2.477: 0.5265% ( 63) 00:12:22.439 2.477 - 2.489: 6.4349% ( 707) 00:12:22.439 2.489 - 2.501: 15.5106% ( 1086) 00:12:22.439 2.501 - 2.513: 19.8395% ( 518) 00:12:22.439 2.513 - 2.524: 36.4199% ( 1984) 00:12:22.439 2.524 - 2.536: 69.7810% ( 3992) 00:12:22.439 2.536 - 2.548: 88.3336% ( 2220) 00:12:22.439 2.548 - 2.560: 92.7378% ( 527) 00:12:22.439 2.560 - 2.572: 95.1362% ( 287) 00:12:22.439 2.572 - 
2.584: 96.4566% ( 158) 00:12:22.439 2.584 - 2.596: 97.1503% ( 83) 00:12:22.439 2.596 - 2.607: 97.7770% ( 75) 00:12:22.439 2.607 - 2.619: 98.0695% ( 35) 00:12:22.439 2.619 - 2.631: 98.3119% ( 29) 00:12:22.439 2.631 - 2.643: 98.4122% ( 12) 00:12:22.439 2.643 - 2.655: 98.5459% ( 16) 00:12:22.439 2.655 - 2.667: 98.5793% ( 4) 00:12:22.439 2.667 - 2.679: 98.5960% ( 2) 00:12:22.439 2.679 - 2.690: 98.6127% ( 2) 00:12:22.439 2.690 - 2.702: 98.6295% ( 2) 00:12:22.439 2.726 - 2.738: 98.6462% ( 2) 00:12:22.439 2.738 - 2.750: 98.6629% ( 2) 00:12:22.439 2.750 - 2.761: 98.6796% ( 2) 00:12:22.439 2.761 - 2.773: 98.6879% ( 1) 00:12:22.439 2.773 - 2.785: 98.7047% ( 2) 00:12:22.439 2.797 - 2.809: 98.7297% ( 3) 00:12:22.439 2.856 - 2.868: 98.7381% ( 1) 00:12:22.439 2.880 - 2.892: 98.7464% ( 1) 00:12:22.439 2.892 - 2.904: 98.7548% ( 1) 00:12:22.439 2.916 - 2.927: 98.7715% ( 2) 00:12:22.439 2.927 - 2.939: 98.7799% ( 1) 00:12:22.439 2.939 - 2.951: 98.7882% ( 1) 00:12:22.439 2.963 - 2.975: 98.7966% ( 1) 00:12:22.439 2.999 - 3.010: 98.8049% ( 1) 00:12:22.439 3.200 - 3.224: 98.8133% ( 1) 00:12:22.439 3.224 - 3.247: 98.8384% ( 3) 00:12:22.439 3.247 - 3.271: 98.8634% ( 3) 00:12:22.439 3.271 - 3.295: 98.8802% ( 2) 00:12:22.439 3.295 - 3.319: 98.9219% ( 5) 00:12:22.439 3.319 - 3.342: 98.9387% ( 2) 00:12:22.439 3.342 - 3.366: 98.9721% ( 4) 00:12:22.439 3.366 - 3.390: 99.0139% ( 5) 00:12:22.439 3.413 - 3.437: 99.0389% ( 3) 00:12:22.439 3.437 - 3.461: 99.0473% ( 1) 00:12:22.439 3.461 - 3.484: 99.0640% ( 2) 00:12:22.439 3.484 - 3.508: 99.0724% ( 1) 00:12:22.439 3.508 - 3.532: 99.0891% ( 2) 00:12:22.439 3.532 - 3.556: 99.0974% ( 1) 00:12:22.439 3.603 - 3.627: 99.1058% ( 1) 00:12:22.439 4.290 - 4.314: 99.1142% ( 1) 00:12:22.439 4.433 - 4.456: 99.1225% ( 1) 00:12:22.439 4.480 - 4.504: 99.1309% ( 1) 00:12:22.439 5.452 - 5.476: 99.1392% ( 1) 00:12:22.439 6.400 - 6.447: 99.1476% ( 1) 00:12:22.439 6.732 - 6.779: 99.1559% ( 1) [2024-04-24 22:04:04.631511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:12:22.439 6.779 - 6.827: 99.1727% ( 2) 00:12:22.439 7.253 - 7.301: 99.1810% ( 1) 00:12:22.439 7.301 - 7.348: 99.1894% ( 1) 00:12:22.439 7.348 - 7.396: 99.1977% ( 1) 00:12:22.439 7.443 - 7.490: 99.2061% ( 1) 00:12:22.439 7.490 - 7.538: 99.2144% ( 1) 00:12:22.439 7.585 - 7.633: 99.2228% ( 1) 00:12:22.439 7.822 - 7.870: 99.2395% ( 2) 00:12:22.439 7.964 - 8.012: 99.2562% ( 2) 00:12:22.439 8.012 - 8.059: 99.2646% ( 1) 00:12:22.439 8.059 - 8.107: 99.2729% ( 1) 00:12:22.439 8.107 - 8.154: 99.2813% ( 1) 00:12:22.439 8.201 - 8.249: 99.2897% ( 1) 00:12:22.439 8.676 - 8.723: 99.2980% ( 1) 00:12:22.439 8.913 - 8.960: 99.3064% ( 1) 00:12:22.439 9.102 - 9.150: 99.3147% ( 1) 00:12:22.439 9.150 - 9.197: 99.3231% ( 1) 00:12:22.439 9.197 - 9.244: 99.3314% ( 1) 00:12:22.439 9.481 - 9.529: 99.3398% ( 1) 00:12:22.439 9.908 - 9.956: 99.3482% ( 1) 00:12:22.439 10.809 - 10.856: 99.3565% ( 1) 00:12:22.439 11.852 - 11.899: 99.3649% ( 1) 00:12:22.439 12.326 - 12.421: 99.3732% ( 1) 00:12:22.439 13.748 - 13.843: 99.3816% ( 1) 00:12:22.439 14.696 - 14.791: 99.3983% ( 2) 00:12:22.439 16.024 - 16.119: 99.4067% ( 1) 00:12:22.439 3980.705 - 4004.978: 99.9916% ( 70) 00:12:22.439 4004.978 - 4029.250: 100.0000% ( 1) 00:12:22.439 00:12:22.439 22:04:04 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:22.439 22:04:04 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:22.439 22:04:04 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:22.439 22:04:04 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:22.439 22:04:04 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.006 [2024-04-24 22:04:04.962942] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems:
deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:23.006 [ 00:12:23.006 { 00:12:23.006 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.006 "subtype": "Discovery", 00:12:23.006 "listen_addresses": [], 00:12:23.006 "allow_any_host": true, 00:12:23.006 "hosts": [] 00:12:23.006 }, 00:12:23.006 { 00:12:23.006 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.006 "subtype": "NVMe", 00:12:23.006 "listen_addresses": [ 00:12:23.006 { 00:12:23.006 "transport": "VFIOUSER", 00:12:23.006 "trtype": "VFIOUSER", 00:12:23.006 "adrfam": "IPv4", 00:12:23.006 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.006 "trsvcid": "0" 00:12:23.006 } 00:12:23.006 ], 00:12:23.006 "allow_any_host": true, 00:12:23.006 "hosts": [], 00:12:23.006 "serial_number": "SPDK1", 00:12:23.006 "model_number": "SPDK bdev Controller", 00:12:23.006 "max_namespaces": 32, 00:12:23.006 "min_cntlid": 1, 00:12:23.006 "max_cntlid": 65519, 00:12:23.006 "namespaces": [ 00:12:23.006 { 00:12:23.006 "nsid": 1, 00:12:23.006 "bdev_name": "Malloc1", 00:12:23.006 "name": "Malloc1", 00:12:23.006 "nguid": "B5F9C716840845BA86216137599628A1", 00:12:23.006 "uuid": "b5f9c716-8408-45ba-8621-6137599628a1" 00:12:23.006 } 00:12:23.006 ] 00:12:23.006 }, 00:12:23.006 { 00:12:23.006 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:23.006 "subtype": "NVMe", 00:12:23.006 "listen_addresses": [ 00:12:23.006 { 00:12:23.006 "transport": "VFIOUSER", 00:12:23.006 "trtype": "VFIOUSER", 00:12:23.006 "adrfam": "IPv4", 00:12:23.006 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.006 "trsvcid": "0" 00:12:23.006 } 00:12:23.006 ], 00:12:23.006 "allow_any_host": true, 00:12:23.006 "hosts": [], 00:12:23.006 "serial_number": "SPDK2", 00:12:23.006 "model_number": "SPDK bdev Controller", 00:12:23.006 "max_namespaces": 32, 00:12:23.006 "min_cntlid": 1, 00:12:23.006 "max_cntlid": 65519, 00:12:23.006 "namespaces": [ 00:12:23.006 { 00:12:23.006 "nsid": 1, 00:12:23.006 "bdev_name": 
"Malloc2", 00:12:23.006 "name": "Malloc2", 00:12:23.006 "nguid": "B312EEC0128E42CABC6151D87062F72D", 00:12:23.006 "uuid": "b312eec0-128e-42ca-bc61-51d87062f72d" 00:12:23.006 } 00:12:23.006 ] 00:12:23.006 } 00:12:23.006 ] 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3906148 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:23.006 22:04:04 -- common/autotest_common.sh@1251 -- # local i=0 00:12:23.006 22:04:04 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:23.006 22:04:04 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:23.006 22:04:04 -- common/autotest_common.sh@1262 -- # return 0 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:23.006 22:04:04 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:23.006 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.006 [2024-04-24 22:04:05.147886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.006 Malloc3 00:12:23.264 22:04:05 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:23.522 [2024-04-24 22:04:05.552062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.522 22:04:05 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:12:23.522 Asynchronous Event Request test 00:12:23.522 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.522 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:23.522 Registering asynchronous event callbacks... 00:12:23.522 Starting namespace attribute notice tests for all controllers... 00:12:23.522 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:23.522 aer_cb - Changed Namespace 00:12:23.522 Cleaning up... 00:12:23.780 [ 00:12:23.780 { 00:12:23.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.780 "subtype": "Discovery", 00:12:23.780 "listen_addresses": [], 00:12:23.780 "allow_any_host": true, 00:12:23.780 "hosts": [] 00:12:23.780 }, 00:12:23.780 { 00:12:23.781 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.781 "subtype": "NVMe", 00:12:23.781 "listen_addresses": [ 00:12:23.781 { 00:12:23.781 "transport": "VFIOUSER", 00:12:23.781 "trtype": "VFIOUSER", 00:12:23.781 "adrfam": "IPv4", 00:12:23.781 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.781 "trsvcid": "0" 00:12:23.781 } 00:12:23.781 ], 00:12:23.781 "allow_any_host": true, 00:12:23.781 "hosts": [], 00:12:23.781 "serial_number": "SPDK1", 00:12:23.781 "model_number": "SPDK bdev Controller", 00:12:23.781 "max_namespaces": 32, 00:12:23.781 "min_cntlid": 1, 00:12:23.781 "max_cntlid": 65519, 00:12:23.781 "namespaces": [ 00:12:23.781 { 00:12:23.781 "nsid": 1, 00:12:23.781 "bdev_name": "Malloc1", 00:12:23.781 "name": "Malloc1", 00:12:23.781 "nguid": "B5F9C716840845BA86216137599628A1", 00:12:23.781 "uuid": "b5f9c716-8408-45ba-8621-6137599628a1" 00:12:23.781 }, 00:12:23.781 { 00:12:23.781 "nsid": 2, 00:12:23.781 "bdev_name": "Malloc3", 00:12:23.781 "name": "Malloc3", 00:12:23.781 "nguid": "D4F1D92372B948178AB6CC2A8EA6707F", 00:12:23.781 "uuid": "d4f1d923-72b9-4817-8ab6-cc2a8ea6707f" 00:12:23.781 } 00:12:23.781 ] 00:12:23.781 }, 00:12:23.781 { 00:12:23.781 "nqn": "nqn.2019-07.io.spdk:cnode2", 
00:12:23.781 "subtype": "NVMe", 00:12:23.781 "listen_addresses": [ 00:12:23.781 { 00:12:23.781 "transport": "VFIOUSER", 00:12:23.781 "trtype": "VFIOUSER", 00:12:23.781 "adrfam": "IPv4", 00:12:23.781 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.781 "trsvcid": "0" 00:12:23.781 } 00:12:23.781 ], 00:12:23.781 "allow_any_host": true, 00:12:23.781 "hosts": [], 00:12:23.781 "serial_number": "SPDK2", 00:12:23.781 "model_number": "SPDK bdev Controller", 00:12:23.781 "max_namespaces": 32, 00:12:23.781 "min_cntlid": 1, 00:12:23.781 "max_cntlid": 65519, 00:12:23.781 "namespaces": [ 00:12:23.781 { 00:12:23.781 "nsid": 1, 00:12:23.781 "bdev_name": "Malloc2", 00:12:23.781 "name": "Malloc2", 00:12:23.781 "nguid": "B312EEC0128E42CABC6151D87062F72D", 00:12:23.781 "uuid": "b312eec0-128e-42ca-bc61-51d87062f72d" 00:12:23.781 } 00:12:23.781 ] 00:12:23.781 } 00:12:23.781 ] 00:12:23.781 22:04:05 -- target/nvmf_vfio_user.sh@44 -- # wait 3906148 00:12:23.781 22:04:05 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.781 22:04:05 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:23.781 22:04:05 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:23.781 22:04:05 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:23.781 [2024-04-24 22:04:05.924491] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:23.781 [2024-04-24 22:04:05.924543] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906284 ] 00:12:23.781 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.781 [2024-04-24 22:04:05.963846] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:23.781 [2024-04-24 22:04:05.968690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.781 [2024-04-24 22:04:05.968724] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f47396bb000 00:12:23.781 [2024-04-24 22:04:05.969684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.970694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.971700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.972719] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.973728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.974733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.975739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.978405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:23.781 [2024-04-24 22:04:05.978762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:23.781 [2024-04-24 22:04:05.978795] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f47396b0000 00:12:23.781 [2024-04-24 22:04:05.980074] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:23.781 [2024-04-24 22:04:05.995859] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:23.781 [2024-04-24 22:04:05.995897] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:23.781 [2024-04-24 22:04:06.001008] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.781 [2024-04-24 22:04:06.001068] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:23.781 [2024-04-24 22:04:06.001168] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:23.781 [2024-04-24 22:04:06.001198] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:23.781 [2024-04-24 22:04:06.001209] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:23.781 [2024-04-24 22:04:06.002015] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:23.781 [2024-04-24 22:04:06.002038] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:23.781 [2024-04-24 22:04:06.002053] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:23.781 [2024-04-24 22:04:06.003019] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:23.781 [2024-04-24 22:04:06.003042] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:23.781 [2024-04-24 22:04:06.003058] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:23.781 [2024-04-24 22:04:06.004027] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:23.781 [2024-04-24 22:04:06.004050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:23.781 [2024-04-24 22:04:06.005033] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:23.781 [2024-04-24 22:04:06.005057] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:23.781 [2024-04-24 22:04:06.005068] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:23.781 [2024-04-24 22:04:06.005081] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:23.781 [2024-04-24 22:04:06.005193] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:23.781 [2024-04-24 22:04:06.005202] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:23.781 [2024-04-24 22:04:06.005212] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:23.781 [2024-04-24 22:04:06.006047] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:23.781 [2024-04-24 22:04:06.007050] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:23.781 [2024-04-24 22:04:06.008062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:23.781 [2024-04-24 22:04:06.009057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.782 [2024-04-24 22:04:06.009133] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:23.782 [2024-04-24 22:04:06.010083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:23.782 [2024-04-24 22:04:06.010106] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:23.782 [2024-04-24 22:04:06.010117] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.010144] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:23.782 [2024-04-24 22:04:06.010164] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.010190] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:23.782 [2024-04-24 22:04:06.010201] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:23.782 [2024-04-24 22:04:06.010223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:23.782 [2024-04-24 22:04:06.016412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:23.782 [2024-04-24 22:04:06.016438] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:23.782 [2024-04-24 22:04:06.016449] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:23.782 [2024-04-24 22:04:06.016458] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:23.782 [2024-04-24 22:04:06.016467] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:23.782 [2024-04-24 22:04:06.016476] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:23.782 [2024-04-24 22:04:06.016485] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:23.782 [2024-04-24 22:04:06.016494] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.016509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.016527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:23.782 [2024-04-24 22:04:06.024410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:23.782 [2024-04-24 22:04:06.024442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.782 [2024-04-24 22:04:06.024458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.782 [2024-04-24 22:04:06.024476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.782 [2024-04-24 22:04:06.024491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.782 [2024-04-24 22:04:06.024501] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.024518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.024534] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:23.782 [2024-04-24 22:04:06.032411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:23.782 [2024-04-24 22:04:06.032431] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:23.782 [2024-04-24 22:04:06.032442] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.032460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.032472] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:23.782 [2024-04-24 22:04:06.032494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.040409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.040480] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.040498] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.040513] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:24.042 [2024-04-24 22:04:06.040523] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:24.042 [2024-04-24 22:04:06.040534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.048406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.048432] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:24.042 [2024-04-24 22:04:06.048451] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.048468] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.048482] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:24.042 [2024-04-24 22:04:06.048491] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:24.042 [2024-04-24 22:04:06.048503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.056408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.056444] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.056463] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:12:24.042 [2024-04-24 22:04:06.056478] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:24.042 [2024-04-24 22:04:06.056487] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:24.042 [2024-04-24 22:04:06.056499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.064405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.064429] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.064444] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.064461] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.064474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.064483] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:24.042 [2024-04-24 22:04:06.064494] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:24.042 [2024-04-24 22:04:06.064502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:24.042 [2024-04-24 
22:04:06.064512] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:24.042 [2024-04-24 22:04:06.064539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.072407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.072437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.080407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.080446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.088434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.096408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.096438] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:24.042 [2024-04-24 22:04:06.096449] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:24.042 [2024-04-24 22:04:06.096456] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:24.042 [2024-04-24 22:04:06.096463] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:12:24.042 [2024-04-24 22:04:06.096479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:24.042 [2024-04-24 22:04:06.096495] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:24.042 [2024-04-24 22:04:06.096505] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:24.042 [2024-04-24 22:04:06.096515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.096529] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:24.042 [2024-04-24 22:04:06.096538] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:24.042 [2024-04-24 22:04:06.096548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.096562] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:24.042 [2024-04-24 22:04:06.096572] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:24.042 [2024-04-24 22:04:06.096582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:24.042 [2024-04-24 22:04:06.104409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.104441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 
22:04:06.104460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:24.042 [2024-04-24 22:04:06.104474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:24.042 ===================================================== 00:12:24.043 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:24.043 ===================================================== 00:12:24.043 Controller Capabilities/Features 00:12:24.043 ================================ 00:12:24.043 Vendor ID: 4e58 00:12:24.043 Subsystem Vendor ID: 4e58 00:12:24.043 Serial Number: SPDK2 00:12:24.043 Model Number: SPDK bdev Controller 00:12:24.043 Firmware Version: 24.05 00:12:24.043 Recommended Arb Burst: 6 00:12:24.043 IEEE OUI Identifier: 8d 6b 50 00:12:24.043 Multi-path I/O 00:12:24.043 May have multiple subsystem ports: Yes 00:12:24.043 May have multiple controllers: Yes 00:12:24.043 Associated with SR-IOV VF: No 00:12:24.043 Max Data Transfer Size: 131072 00:12:24.043 Max Number of Namespaces: 32 00:12:24.043 Max Number of I/O Queues: 127 00:12:24.043 NVMe Specification Version (VS): 1.3 00:12:24.043 NVMe Specification Version (Identify): 1.3 00:12:24.043 Maximum Queue Entries: 256 00:12:24.043 Contiguous Queues Required: Yes 00:12:24.043 Arbitration Mechanisms Supported 00:12:24.043 Weighted Round Robin: Not Supported 00:12:24.043 Vendor Specific: Not Supported 00:12:24.043 Reset Timeout: 15000 ms 00:12:24.043 Doorbell Stride: 4 bytes 00:12:24.043 NVM Subsystem Reset: Not Supported 00:12:24.043 Command Sets Supported 00:12:24.043 NVM Command Set: Supported 00:12:24.043 Boot Partition: Not Supported 00:12:24.043 Memory Page Size Minimum: 4096 bytes 00:12:24.043 Memory Page Size Maximum: 4096 bytes 00:12:24.043 Persistent Memory Region: Not Supported 00:12:24.043 Optional Asynchronous Events Supported 00:12:24.043 Namespace 
Attribute Notices: Supported 00:12:24.043 Firmware Activation Notices: Not Supported 00:12:24.043 ANA Change Notices: Not Supported 00:12:24.043 PLE Aggregate Log Change Notices: Not Supported 00:12:24.043 LBA Status Info Alert Notices: Not Supported 00:12:24.043 EGE Aggregate Log Change Notices: Not Supported 00:12:24.043 Normal NVM Subsystem Shutdown event: Not Supported 00:12:24.043 Zone Descriptor Change Notices: Not Supported 00:12:24.043 Discovery Log Change Notices: Not Supported 00:12:24.043 Controller Attributes 00:12:24.043 128-bit Host Identifier: Supported 00:12:24.043 Non-Operational Permissive Mode: Not Supported 00:12:24.043 NVM Sets: Not Supported 00:12:24.043 Read Recovery Levels: Not Supported 00:12:24.043 Endurance Groups: Not Supported 00:12:24.043 Predictable Latency Mode: Not Supported 00:12:24.043 Traffic Based Keep ALive: Not Supported 00:12:24.043 Namespace Granularity: Not Supported 00:12:24.043 SQ Associations: Not Supported 00:12:24.043 UUID List: Not Supported 00:12:24.043 Multi-Domain Subsystem: Not Supported 00:12:24.043 Fixed Capacity Management: Not Supported 00:12:24.043 Variable Capacity Management: Not Supported 00:12:24.043 Delete Endurance Group: Not Supported 00:12:24.043 Delete NVM Set: Not Supported 00:12:24.043 Extended LBA Formats Supported: Not Supported 00:12:24.043 Flexible Data Placement Supported: Not Supported 00:12:24.043 00:12:24.043 Controller Memory Buffer Support 00:12:24.043 ================================ 00:12:24.043 Supported: No 00:12:24.043 00:12:24.043 Persistent Memory Region Support 00:12:24.043 ================================ 00:12:24.043 Supported: No 00:12:24.043 00:12:24.043 Admin Command Set Attributes 00:12:24.043 ============================ 00:12:24.043 Security Send/Receive: Not Supported 00:12:24.043 Format NVM: Not Supported 00:12:24.043 Firmware Activate/Download: Not Supported 00:12:24.043 Namespace Management: Not Supported 00:12:24.043 Device Self-Test: Not Supported 00:12:24.043 
Directives: Not Supported 00:12:24.043 NVMe-MI: Not Supported 00:12:24.043 Virtualization Management: Not Supported 00:12:24.043 Doorbell Buffer Config: Not Supported 00:12:24.043 Get LBA Status Capability: Not Supported 00:12:24.043 Command & Feature Lockdown Capability: Not Supported 00:12:24.043 Abort Command Limit: 4 00:12:24.043 Async Event Request Limit: 4 00:12:24.043 Number of Firmware Slots: N/A 00:12:24.043 Firmware Slot 1 Read-Only: N/A 00:12:24.043 Firmware Activation Without Reset: N/A 00:12:24.043 Multiple Update Detection Support: N/A 00:12:24.043 Firmware Update Granularity: No Information Provided 00:12:24.043 Per-Namespace SMART Log: No 00:12:24.043 Asymmetric Namespace Access Log Page: Not Supported 00:12:24.043 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:24.043 Command Effects Log Page: Supported 00:12:24.043 Get Log Page Extended Data: Supported 00:12:24.043 Telemetry Log Pages: Not Supported 00:12:24.043 Persistent Event Log Pages: Not Supported 00:12:24.043 Supported Log Pages Log Page: May Support 00:12:24.043 Commands Supported & Effects Log Page: Not Supported 00:12:24.043 Feature Identifiers & Effects Log Page:May Support 00:12:24.043 NVMe-MI Commands & Effects Log Page: May Support 00:12:24.043 Data Area 4 for Telemetry Log: Not Supported 00:12:24.043 Error Log Page Entries Supported: 128 00:12:24.043 Keep Alive: Supported 00:12:24.043 Keep Alive Granularity: 10000 ms 00:12:24.043 00:12:24.043 NVM Command Set Attributes 00:12:24.043 ========================== 00:12:24.043 Submission Queue Entry Size 00:12:24.043 Max: 64 00:12:24.043 Min: 64 00:12:24.043 Completion Queue Entry Size 00:12:24.043 Max: 16 00:12:24.043 Min: 16 00:12:24.043 Number of Namespaces: 32 00:12:24.043 Compare Command: Supported 00:12:24.043 Write Uncorrectable Command: Not Supported 00:12:24.043 Dataset Management Command: Supported 00:12:24.043 Write Zeroes Command: Supported 00:12:24.043 Set Features Save Field: Not Supported 00:12:24.043 Reservations: Not 
Supported 00:12:24.043 Timestamp: Not Supported 00:12:24.043 Copy: Supported 00:12:24.043 Volatile Write Cache: Present 00:12:24.043 Atomic Write Unit (Normal): 1 00:12:24.043 Atomic Write Unit (PFail): 1 00:12:24.043 Atomic Compare & Write Unit: 1 00:12:24.043 Fused Compare & Write: Supported 00:12:24.043 Scatter-Gather List 00:12:24.043 SGL Command Set: Supported (Dword aligned) 00:12:24.043 SGL Keyed: Not Supported 00:12:24.043 SGL Bit Bucket Descriptor: Not Supported 00:12:24.043 SGL Metadata Pointer: Not Supported 00:12:24.043 Oversized SGL: Not Supported 00:12:24.043 SGL Metadata Address: Not Supported 00:12:24.043 SGL Offset: Not Supported 00:12:24.043 Transport SGL Data Block: Not Supported 00:12:24.043 Replay Protected Memory Block: Not Supported 00:12:24.043 00:12:24.043 Firmware Slot Information 00:12:24.043 ========================= 00:12:24.043 Active slot: 1 00:12:24.043 Slot 1 Firmware Revision: 24.05 00:12:24.043 00:12:24.043 00:12:24.043 Commands Supported and Effects 00:12:24.043 ============================== 00:12:24.043 Admin Commands 00:12:24.043 -------------- 00:12:24.043 Get Log Page (02h): Supported 00:12:24.043 Identify (06h): Supported 00:12:24.043 Abort (08h): Supported 00:12:24.043 Set Features (09h): Supported 00:12:24.043 Get Features (0Ah): Supported 00:12:24.043 Asynchronous Event Request (0Ch): Supported 00:12:24.043 Keep Alive (18h): Supported 00:12:24.043 I/O Commands 00:12:24.043 ------------ 00:12:24.043 Flush (00h): Supported LBA-Change 00:12:24.043 Write (01h): Supported LBA-Change 00:12:24.043 Read (02h): Supported 00:12:24.043 Compare (05h): Supported 00:12:24.043 Write Zeroes (08h): Supported LBA-Change 00:12:24.043 Dataset Management (09h): Supported LBA-Change 00:12:24.043 Copy (19h): Supported LBA-Change 00:12:24.043 Unknown (79h): Supported LBA-Change 00:12:24.043 Unknown (7Ah): Supported 00:12:24.043 00:12:24.043 Error Log 00:12:24.043 ========= 00:12:24.043 00:12:24.043 Arbitration 00:12:24.043 =========== 
00:12:24.043 Arbitration Burst: 1 00:12:24.043 00:12:24.043 Power Management 00:12:24.043 ================ 00:12:24.043 Number of Power States: 1 00:12:24.043 Current Power State: Power State #0 00:12:24.043 Power State #0: 00:12:24.043 Max Power: 0.00 W 00:12:24.043 Non-Operational State: Operational 00:12:24.043 Entry Latency: Not Reported 00:12:24.043 Exit Latency: Not Reported 00:12:24.043 Relative Read Throughput: 0 00:12:24.043 Relative Read Latency: 0 00:12:24.043 Relative Write Throughput: 0 00:12:24.043 Relative Write Latency: 0 00:12:24.043 Idle Power: Not Reported 00:12:24.043 Active Power: Not Reported 00:12:24.043 Non-Operational Permissive Mode: Not Supported 00:12:24.043 00:12:24.043 Health Information 00:12:24.043 ================== 00:12:24.043 Critical Warnings: 00:12:24.043 Available Spare Space: OK 00:12:24.043 Temperature: OK 00:12:24.043 Device Reliability: OK 00:12:24.043 Read Only: No 00:12:24.043 Volatile Memory Backup: OK 00:12:24.043 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:24.044 [2024-04-24 22:04:06.104623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:24.044 [2024-04-24 22:04:06.112410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:24.044 [2024-04-24 22:04:06.112463] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:24.044 [2024-04-24 22:04:06.112481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.044 [2024-04-24 22:04:06.112494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.044 [2024-04-24 22:04:06.112506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:12:24.044 [2024-04-24 22:04:06.112517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.044 [2024-04-24 22:04:06.112608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:24.044 [2024-04-24 22:04:06.112633] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:24.044 [2024-04-24 22:04:06.113616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.044 [2024-04-24 22:04:06.113697] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:24.044 [2024-04-24 22:04:06.113715] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:24.044 [2024-04-24 22:04:06.114628] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:24.044 [2024-04-24 22:04:06.114661] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:24.044 [2024-04-24 22:04:06.114723] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:24.044 [2024-04-24 22:04:06.117408] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:24.044 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:24.044 Available Spare: 0% 00:12:24.044 Available Spare Threshold: 0% 00:12:24.044 Life Percentage Used: 0% 00:12:24.044 Data Units Read: 0 00:12:24.044 Data Units Written: 0 00:12:24.044 Host Read Commands: 0 00:12:24.044 Host Write
Commands: 0 00:12:24.044 Controller Busy Time: 0 minutes 00:12:24.044 Power Cycles: 0 00:12:24.044 Power On Hours: 0 hours 00:12:24.044 Unsafe Shutdowns: 0 00:12:24.044 Unrecoverable Media Errors: 0 00:12:24.044 Lifetime Error Log Entries: 0 00:12:24.044 Warning Temperature Time: 0 minutes 00:12:24.044 Critical Temperature Time: 0 minutes 00:12:24.044 00:12:24.044 Number of Queues 00:12:24.044 ================ 00:12:24.044 Number of I/O Submission Queues: 127 00:12:24.044 Number of I/O Completion Queues: 127 00:12:24.044 00:12:24.044 Active Namespaces 00:12:24.044 ================= 00:12:24.044 Namespace ID:1 00:12:24.044 Error Recovery Timeout: Unlimited 00:12:24.044 Command Set Identifier: NVM (00h) 00:12:24.044 Deallocate: Supported 00:12:24.044 Deallocated/Unwritten Error: Not Supported 00:12:24.044 Deallocated Read Value: Unknown 00:12:24.044 Deallocate in Write Zeroes: Not Supported 00:12:24.044 Deallocated Guard Field: 0xFFFF 00:12:24.044 Flush: Supported 00:12:24.044 Reservation: Supported 00:12:24.044 Namespace Sharing Capabilities: Multiple Controllers 00:12:24.044 Size (in LBAs): 131072 (0GiB) 00:12:24.044 Capacity (in LBAs): 131072 (0GiB) 00:12:24.044 Utilization (in LBAs): 131072 (0GiB) 00:12:24.044 NGUID: B312EEC0128E42CABC6151D87062F72D 00:12:24.044 UUID: b312eec0-128e-42ca-bc61-51d87062f72d 00:12:24.044 Thin Provisioning: Not Supported 00:12:24.044 Per-NS Atomic Units: Yes 00:12:24.044 Atomic Boundary Size (Normal): 0 00:12:24.044 Atomic Boundary Size (PFail): 0 00:12:24.044 Atomic Boundary Offset: 0 00:12:24.044 Maximum Single Source Range Length: 65535 00:12:24.044 Maximum Copy Length: 65535 00:12:24.044 Maximum Source Range Count: 1 00:12:24.044 NGUID/EUI64 Never Reused: No 00:12:24.044 Namespace Write Protected: No 00:12:24.044 Number of LBA Formats: 1 00:12:24.044 Current LBA Format: LBA Format #00 00:12:24.044 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:24.044 00:12:24.044 22:04:06 -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:24.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.361 [2024-04-24 22:04:06.360207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.653 [2024-04-24 22:04:11.465774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.653 Initializing NVMe Controllers 00:12:29.654 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:29.654 Initialization complete. Launching workers. 00:12:29.654 ======================================================== 00:12:29.654 Latency(us) 00:12:29.654 Device Information : IOPS MiB/s Average min max 00:12:29.654 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25537.50 99.76 5011.12 1391.99 7603.66 00:12:29.654 ======================================================== 00:12:29.654 Total : 25537.50 99.76 5011.12 1391.99 7603.66 00:12:29.654 00:12:29.654 22:04:11 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:29.654 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.654 [2024-04-24 22:04:11.711519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:34.926 [2024-04-24 22:04:16.734390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.926 Initializing NVMe Controllers 00:12:34.926 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.926 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:34.926 Initialization complete. Launching workers. 00:12:34.926 ======================================================== 00:12:34.926 Latency(us) 00:12:34.926 Device Information : IOPS MiB/s Average min max 00:12:34.926 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25205.47 98.46 5076.60 1396.53 10040.60 00:12:34.926 ======================================================== 00:12:34.927 Total : 25205.47 98.46 5076.60 1396.53 10040.60 00:12:34.927 00:12:34.927 22:04:16 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:34.927 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.927 [2024-04-24 22:04:16.963562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.196 [2024-04-24 22:04:22.123828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.196 Initializing NVMe Controllers 00:12:40.196 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:40.196 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:40.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:40.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:40.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:40.196 Initialization complete. Launching workers. 
00:12:40.196 Starting thread on core 2 00:12:40.196 Starting thread on core 3 00:12:40.196 Starting thread on core 1 00:12:40.196 22:04:22 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:40.196 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.455 [2024-04-24 22:04:22.452969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.757 [2024-04-24 22:04:25.518806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.757 Initializing NVMe Controllers 00:12:43.757 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.757 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.757 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:43.757 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:43.757 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:43.757 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:43.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:43.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:43.757 Initialization complete. Launching workers. 
00:12:43.757 Starting thread on core 1 with urgent priority queue 00:12:43.757 Starting thread on core 2 with urgent priority queue 00:12:43.757 Starting thread on core 3 with urgent priority queue 00:12:43.757 Starting thread on core 0 with urgent priority queue 00:12:43.757 SPDK bdev Controller (SPDK2 ) core 0: 5182.67 IO/s 19.30 secs/100000 ios 00:12:43.757 SPDK bdev Controller (SPDK2 ) core 1: 4489.00 IO/s 22.28 secs/100000 ios 00:12:43.757 SPDK bdev Controller (SPDK2 ) core 2: 5077.33 IO/s 19.70 secs/100000 ios 00:12:43.757 SPDK bdev Controller (SPDK2 ) core 3: 5132.67 IO/s 19.48 secs/100000 ios 00:12:43.757 ======================================================== 00:12:43.757 00:12:43.757 22:04:25 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.757 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.757 [2024-04-24 22:04:25.842914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.757 [2024-04-24 22:04:25.856196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.757 Initializing NVMe Controllers 00:12:43.757 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.757 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:43.757 Namespace ID: 1 size: 0GB 00:12:43.757 Initialization complete. 00:12:43.757 INFO: using host memory buffer for IO 00:12:43.757 Hello world! 
00:12:43.757 22:04:25 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:43.757 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.016 [2024-04-24 22:04:26.176215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.393 Initializing NVMe Controllers 00:12:45.393 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.393 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:45.393 Initialization complete. Launching workers. 00:12:45.393 submit (in ns) avg, min, max = 9555.5, 4124.4, 4005814.8 00:12:45.393 complete (in ns) avg, min, max = 31007.3, 2426.7, 4007728.9 00:12:45.393 00:12:45.393 Submit histogram 00:12:45.393 ================ 00:12:45.393 Range in us Cumulative Count 00:12:45.393 4.124 - 4.148: 0.0415% ( 5) 00:12:45.393 4.148 - 4.172: 0.2575% ( 26) 00:12:45.393 4.172 - 4.196: 0.9718% ( 86) 00:12:45.393 4.196 - 4.219: 2.9983% ( 244) 00:12:45.393 4.219 - 4.243: 6.7525% ( 452) 00:12:45.393 4.243 - 4.267: 12.0847% ( 642) 00:12:45.393 4.267 - 4.290: 18.6462% ( 790) 00:12:45.393 4.290 - 4.314: 27.5166% ( 1068) 00:12:45.393 4.314 - 4.338: 35.0581% ( 908) 00:12:45.393 4.338 - 4.361: 41.5781% ( 785) 00:12:45.393 4.361 - 4.385: 46.5864% ( 603) 00:12:45.393 4.385 - 4.409: 48.8621% ( 274) 00:12:45.393 4.409 - 4.433: 50.4319% ( 189) 00:12:45.393 4.433 - 4.456: 53.1478% ( 327) 00:12:45.393 4.456 - 4.480: 56.4535% ( 398) 00:12:45.393 4.480 - 4.504: 61.0382% ( 552) 00:12:45.393 4.504 - 4.527: 66.1462% ( 615) 00:12:45.393 4.527 - 4.551: 70.0914% ( 475) 00:12:45.393 4.551 - 4.575: 73.0316% ( 354) 00:12:45.393 4.575 - 4.599: 74.4767% ( 174) 00:12:45.393 4.599 - 4.622: 75.3654% ( 107) 00:12:45.393 4.622 - 4.646: 76.0963% ( 88) 00:12:45.393 4.646 - 4.670: 76.5449% ( 54) 00:12:45.393 4.670 - 4.693: 77.0017% ( 
55) 00:12:45.393 4.693 - 4.717: 77.7492% ( 90) 00:12:45.393 4.717 - 4.741: 78.3804% ( 76) 00:12:45.393 4.741 - 4.764: 78.5880% ( 25) 00:12:45.393 4.764 - 4.788: 78.7043% ( 14) 00:12:45.393 4.788 - 4.812: 78.7708% ( 8) 00:12:45.393 4.812 - 4.836: 78.7957% ( 3) 00:12:45.393 4.836 - 4.859: 79.0449% ( 30) 00:12:45.393 4.859 - 4.883: 80.9801% ( 233) 00:12:45.393 4.883 - 4.907: 85.8887% ( 591) 00:12:45.393 4.907 - 4.930: 94.4269% ( 1028) 00:12:45.393 4.930 - 4.954: 95.8887% ( 176) 00:12:45.393 4.954 - 4.978: 96.5116% ( 75) 00:12:45.393 4.978 - 5.001: 96.7608% ( 30) 00:12:45.393 5.001 - 5.025: 96.8605% ( 12) 00:12:45.393 5.025 - 5.049: 96.9269% ( 8) 00:12:45.393 5.049 - 5.073: 96.9934% ( 8) 00:12:45.393 5.073 - 5.096: 97.0266% ( 4) 00:12:45.393 5.096 - 5.120: 97.0930% ( 8) 00:12:45.393 5.120 - 5.144: 97.1844% ( 11) 00:12:45.393 5.144 - 5.167: 97.3339% ( 18) 00:12:45.393 5.167 - 5.191: 97.5332% ( 24) 00:12:45.393 5.191 - 5.215: 97.7409% ( 25) 00:12:45.393 5.215 - 5.239: 97.8239% ( 10) 00:12:45.393 5.239 - 5.262: 97.8904% ( 8) 00:12:45.393 5.262 - 5.286: 97.9651% ( 9) 00:12:45.393 5.286 - 5.310: 98.0233% ( 7) 00:12:45.393 5.310 - 5.333: 98.0482% ( 3) 00:12:45.393 5.333 - 5.357: 98.0814% ( 4) 00:12:45.393 5.357 - 5.381: 98.1478% ( 8) 00:12:45.393 5.381 - 5.404: 98.1894% ( 5) 00:12:45.393 5.404 - 5.428: 98.2226% ( 4) 00:12:45.393 5.428 - 5.452: 98.2558% ( 4) 00:12:45.393 5.452 - 5.476: 98.3140% ( 7) 00:12:45.393 5.476 - 5.499: 98.3555% ( 5) 00:12:45.393 5.499 - 5.523: 98.3887% ( 4) 00:12:45.393 5.523 - 5.547: 98.4136% ( 3) 00:12:45.393 5.547 - 5.570: 98.4302% ( 2) 00:12:45.393 5.570 - 5.594: 98.4468% ( 2) 00:12:45.393 5.594 - 5.618: 98.4884% ( 5) 00:12:45.393 5.618 - 5.641: 98.5050% ( 2) 00:12:45.393 5.641 - 5.665: 98.5216% ( 2) 00:12:45.393 5.665 - 5.689: 98.5382% ( 2) 00:12:45.393 5.689 - 5.713: 98.5548% ( 2) 00:12:45.393 5.736 - 5.760: 98.5631% ( 1) 00:12:45.393 5.784 - 5.807: 98.5797% ( 2) 00:12:45.393 5.807 - 5.831: 98.5880% ( 1) 00:12:45.393 5.831 - 5.855: 98.6130% ( 3) 
00:12:45.393 5.879 - 5.902: 98.6213% ( 1) 00:12:45.393 5.902 - 5.926: 98.6296% ( 1) 00:12:45.393 5.973 - 5.997: 98.6379% ( 1) 00:12:45.393 5.997 - 6.021: 98.6462% ( 1) 00:12:45.393 6.044 - 6.068: 98.6545% ( 1) 00:12:45.393 6.116 - 6.163: 98.6628% ( 1) 00:12:45.393 6.163 - 6.210: 98.6794% ( 2) 00:12:45.393 6.210 - 6.258: 98.6877% ( 1) 00:12:45.393 6.305 - 6.353: 98.6960% ( 1) 00:12:45.393 6.353 - 6.400: 98.7043% ( 1) 00:12:45.393 6.400 - 6.447: 98.7126% ( 1) 00:12:45.393 6.447 - 6.495: 98.7292% ( 2) 00:12:45.393 6.542 - 6.590: 98.7458% ( 2) 00:12:45.393 6.590 - 6.637: 98.7542% ( 1) 00:12:45.393 6.684 - 6.732: 98.7625% ( 1) 00:12:45.393 6.732 - 6.779: 98.7708% ( 1) 00:12:45.393 6.779 - 6.827: 98.7791% ( 1) 00:12:45.393 6.827 - 6.874: 98.7874% ( 1) 00:12:45.393 6.874 - 6.921: 98.7957% ( 1) 00:12:45.393 7.159 - 7.206: 98.8040% ( 1) 00:12:45.393 7.206 - 7.253: 98.8123% ( 1) 00:12:45.393 7.301 - 7.348: 98.8206% ( 1) 00:12:45.393 7.396 - 7.443: 98.8289% ( 1) 00:12:45.393 7.538 - 7.585: 98.8372% ( 1) 00:12:45.393 7.727 - 7.775: 98.8621% ( 3) 00:12:45.393 8.059 - 8.107: 98.8787% ( 2) 00:12:45.393 8.201 - 8.249: 98.8870% ( 1) 00:12:45.393 8.344 - 8.391: 98.9037% ( 2) 00:12:45.393 8.486 - 8.533: 98.9203% ( 2) 00:12:45.393 8.533 - 8.581: 98.9286% ( 1) 00:12:45.393 8.581 - 8.628: 98.9452% ( 2) 00:12:45.393 8.628 - 8.676: 98.9535% ( 1) 00:12:45.393 8.676 - 8.723: 98.9618% ( 1) 00:12:45.393 8.723 - 8.770: 98.9867% ( 3) 00:12:45.393 8.770 - 8.818: 99.0199% ( 4) 00:12:45.393 8.865 - 8.913: 99.0449% ( 3) 00:12:45.393 8.913 - 8.960: 99.0698% ( 3) 00:12:45.393 8.960 - 9.007: 99.0781% ( 1) 00:12:45.393 9.055 - 9.102: 99.0947% ( 2) 00:12:45.393 9.102 - 9.150: 99.1030% ( 1) 00:12:45.393 9.244 - 9.292: 99.1196% ( 2) 00:12:45.393 9.339 - 9.387: 99.1362% ( 2) 00:12:45.393 9.387 - 9.434: 99.1445% ( 1) 00:12:45.393 9.434 - 9.481: 99.1611% ( 2) 00:12:45.393 9.481 - 9.529: 99.1777% ( 2) 00:12:45.393 9.576 - 9.624: 99.1860% ( 1) 00:12:45.393 9.671 - 9.719: 99.1944% ( 1) 00:12:45.393 9.719 - 
9.766: 99.2110% ( 2) 00:12:45.393 9.766 - 9.813: 99.2193% ( 1) 00:12:45.393 9.813 - 9.861: 99.2276% ( 1) 00:12:45.393 9.908 - 9.956: 99.2525% ( 3) 00:12:45.393 9.956 - 10.003: 99.2608% ( 1) 00:12:45.393 10.003 - 10.050: 99.3023% ( 5) 00:12:45.393 10.050 - 10.098: 99.3189% ( 2) 00:12:45.393 10.098 - 10.145: 99.3272% ( 1) 00:12:45.393 10.145 - 10.193: 99.3439% ( 2) 00:12:45.393 10.240 - 10.287: 99.3605% ( 2) 00:12:45.393 10.287 - 10.335: 99.3771% ( 2) 00:12:45.394 10.477 - 10.524: 99.3937% ( 2) 00:12:45.394 10.524 - 10.572: 99.4020% ( 1) 00:12:45.394 10.619 - 10.667: 99.4186% ( 2) 00:12:45.394 10.761 - 10.809: 99.4269% ( 1) 00:12:45.394 10.856 - 10.904: 99.4352% ( 1) 00:12:45.394 10.904 - 10.951: 99.4435% ( 1) 00:12:45.394 10.999 - 11.046: 99.4518% ( 1) 00:12:45.394 11.141 - 11.188: 99.4601% ( 1) 00:12:45.394 11.188 - 11.236: 99.4767% ( 2) 00:12:45.394 11.330 - 11.378: 99.4850% ( 1) 00:12:45.394 11.567 - 11.615: 99.4934% ( 1) 00:12:45.394 11.615 - 11.662: 99.5100% ( 2) 00:12:45.394 11.662 - 11.710: 99.5266% ( 2) 00:12:45.394 11.757 - 11.804: 99.5349% ( 1) 00:12:45.394 11.852 - 11.899: 99.5432% ( 1) 00:12:45.394 11.947 - 11.994: 99.5598% ( 2) 00:12:45.394 12.041 - 12.089: 99.5681% ( 1) 00:12:45.394 12.136 - 12.231: 99.5764% ( 1) 00:12:45.394 12.231 - 12.326: 99.5847% ( 1) 00:12:45.394 12.421 - 12.516: 99.5930% ( 1) 00:12:45.394 12.516 - 12.610: 99.6013% ( 1) 00:12:45.394 12.610 - 12.705: 99.6096% ( 1) 00:12:45.394 12.705 - 12.800: 99.6179% ( 1) 00:12:45.394 13.179 - 13.274: 99.6262% ( 1) 00:12:45.394 13.274 - 13.369: 99.6512% ( 3) 00:12:45.394 13.369 - 13.464: 99.6678% ( 2) 00:12:45.394 13.464 - 13.559: 99.6927% ( 3) 00:12:45.394 13.653 - 13.748: 99.7010% ( 1) 00:12:45.394 13.748 - 13.843: 99.7176% ( 2) 00:12:45.394 13.843 - 13.938: 99.7259% ( 1) 00:12:45.394 13.938 - 14.033: 99.7508% ( 3) 00:12:45.394 14.222 - 14.317: 99.7591% ( 1) 00:12:45.394 14.317 - 14.412: 99.7757% ( 2) 00:12:45.394 14.412 - 14.507: 99.7924% ( 2) 00:12:45.394 14.696 - 14.791: 99.8090% ( 2) 
00:12:45.394 14.791 - 14.886: 99.8173% ( 1) 00:12:45.394 14.981 - 15.076: 99.8256% ( 1) 00:12:45.394 [2024-04-24 22:04:27.271548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.394 15.076 - 15.170: 99.8339% ( 1) 00:12:45.394 15.265 - 15.360: 99.8422% ( 1) 00:12:45.394 15.644 - 15.739: 99.8505% ( 1) 00:12:45.394 16.403 - 16.498: 99.8588% ( 1) 00:12:45.394 19.342 - 19.437: 99.8671% ( 1) 00:12:45.394 21.333 - 21.428: 99.8754% ( 1) 00:12:45.394 3980.705 - 4004.978: 99.9917% ( 14) 00:12:45.394 4004.978 - 4029.250: 100.0000% ( 1) 00:12:45.394 00:12:45.394 Complete histogram 00:12:45.394 ================== 00:12:45.394 Range in us Cumulative Count 00:12:45.394 2.418 - 2.430: 0.0166% ( 2) 00:12:45.394 2.430 - 2.441: 4.3937% ( 527) 00:12:45.394 2.441 - 2.453: 16.2542% ( 1428) 00:12:45.394 2.453 - 2.465: 20.0415% ( 456) 00:12:45.394 2.465 - 2.477: 30.1080% ( 1212) 00:12:45.394 2.477 - 2.489: 68.7625% ( 4654) 00:12:45.394 2.489 - 2.501: 88.7625% ( 2408) 00:12:45.394 2.501 - 2.513: 93.1977% ( 534) 00:12:45.394 2.513 - 2.524: 96.4203% ( 388) 00:12:45.394 2.524 - 2.536: 97.8571% ( 173) 00:12:45.394 2.536 - 2.548: 98.2973% ( 53) 00:12:45.394 2.548 - 2.560: 98.4635% ( 20) 00:12:45.394 2.560 - 2.572: 98.5880% ( 15) 00:12:45.394 2.572 - 2.584: 98.6711% ( 10) 00:12:45.394 2.584 - 2.596: 98.6877% ( 2) 00:12:45.394 2.596 - 2.607: 98.7209% ( 4) 00:12:45.394 2.631 - 2.643: 98.7292% ( 1) 00:12:45.394 2.690 - 2.702: 98.7542% ( 3) 00:12:45.394 2.702 - 2.714: 98.7625% ( 1) 00:12:45.394 2.714 - 2.726: 98.7957% ( 4) 00:12:45.394 2.726 - 2.738: 98.8040% ( 1) 00:12:45.394 2.738 - 2.750: 98.8206% ( 2) 00:12:45.394 2.750 - 2.761: 98.8372% ( 2) 00:12:45.394 2.761 - 2.773: 98.8621% ( 3) 00:12:45.394 2.785 - 2.797: 98.8787% ( 2) 00:12:45.394 2.797 - 2.809: 98.8870% ( 1) 00:12:45.394 2.844 - 2.856: 98.8953% ( 1) 00:12:45.394 2.987 - 2.999: 98.9037% ( 1) 00:12:45.394 3.010 - 3.022: 98.9120% ( 1) 00:12:45.394 3.224 - 3.247: 98.9203% ( 1) 
00:12:45.394 3.319 - 3.342: 98.9286% ( 1) 00:12:45.394 3.342 - 3.366: 98.9535% ( 3) 00:12:45.394 3.390 - 3.413: 98.9784% ( 3) 00:12:45.394 3.413 - 3.437: 98.9950% ( 2) 00:12:45.394 3.437 - 3.461: 99.0033% ( 1) 00:12:45.394 3.461 - 3.484: 99.0282% ( 3) 00:12:45.394 3.484 - 3.508: 99.0365% ( 1) 00:12:45.394 3.508 - 3.532: 99.0449% ( 1) 00:12:45.394 3.556 - 3.579: 99.0615% ( 2) 00:12:45.394 3.603 - 3.627: 99.0698% ( 1) 00:12:45.394 3.627 - 3.650: 99.0781% ( 1) 00:12:45.394 4.338 - 4.361: 99.0864% ( 1) 00:12:45.394 4.361 - 4.385: 99.0947% ( 1) 00:12:45.394 4.551 - 4.575: 99.1030% ( 1) 00:12:45.394 4.741 - 4.764: 99.1113% ( 1) 00:12:45.394 4.859 - 4.883: 99.1196% ( 1) 00:12:45.394 5.902 - 5.926: 99.1279% ( 1) 00:12:45.394 6.021 - 6.044: 99.1362% ( 1) 00:12:45.394 6.827 - 6.874: 99.1445% ( 1) 00:12:45.394 6.874 - 6.921: 99.1611% ( 2) 00:12:45.394 7.396 - 7.443: 99.1694% ( 1) 00:12:45.394 7.443 - 7.490: 99.1777% ( 1) 00:12:45.394 7.585 - 7.633: 99.1860% ( 1) 00:12:45.394 7.680 - 7.727: 99.1944% ( 1) 00:12:45.394 7.727 - 7.775: 99.2027% ( 1) 00:12:45.394 7.822 - 7.870: 99.2110% ( 1) 00:12:45.394 8.107 - 8.154: 99.2193% ( 1) 00:12:45.394 8.201 - 8.249: 99.2276% ( 1) 00:12:45.394 8.249 - 8.296: 99.2359% ( 1) 00:12:45.394 8.439 - 8.486: 99.2442% ( 1) 00:12:45.394 9.102 - 9.150: 99.2525% ( 1) 00:12:45.394 9.292 - 9.339: 99.2608% ( 1) 00:12:45.394 10.667 - 10.714: 99.2691% ( 1) 00:12:45.394 10.856 - 10.904: 99.2774% ( 1) 00:12:45.394 24.273 - 24.462: 99.2857% ( 1) 00:12:45.394 3398.163 - 3422.436: 99.2940% ( 1) 00:12:45.394 3980.705 - 4004.978: 99.9834% ( 83) 00:12:45.394 4004.978 - 4029.250: 100.0000% ( 2) 00:12:45.394 00:12:45.394 22:04:27 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:45.394 22:04:27 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:45.394 22:04:27 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:45.394 
22:04:27 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:45.394 22:04:27 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:45.394 [ 00:12:45.394 { 00:12:45.394 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.394 "subtype": "Discovery", 00:12:45.394 "listen_addresses": [], 00:12:45.394 "allow_any_host": true, 00:12:45.394 "hosts": [] 00:12:45.394 }, 00:12:45.394 { 00:12:45.394 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:45.394 "subtype": "NVMe", 00:12:45.394 "listen_addresses": [ 00:12:45.394 { 00:12:45.394 "transport": "VFIOUSER", 00:12:45.394 "trtype": "VFIOUSER", 00:12:45.394 "adrfam": "IPv4", 00:12:45.394 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:45.394 "trsvcid": "0" 00:12:45.394 } 00:12:45.394 ], 00:12:45.394 "allow_any_host": true, 00:12:45.394 "hosts": [], 00:12:45.394 "serial_number": "SPDK1", 00:12:45.394 "model_number": "SPDK bdev Controller", 00:12:45.394 "max_namespaces": 32, 00:12:45.394 "min_cntlid": 1, 00:12:45.394 "max_cntlid": 65519, 00:12:45.394 "namespaces": [ 00:12:45.394 { 00:12:45.395 "nsid": 1, 00:12:45.395 "bdev_name": "Malloc1", 00:12:45.395 "name": "Malloc1", 00:12:45.395 "nguid": "B5F9C716840845BA86216137599628A1", 00:12:45.395 "uuid": "b5f9c716-8408-45ba-8621-6137599628a1" 00:12:45.395 }, 00:12:45.395 { 00:12:45.395 "nsid": 2, 00:12:45.395 "bdev_name": "Malloc3", 00:12:45.395 "name": "Malloc3", 00:12:45.395 "nguid": "D4F1D92372B948178AB6CC2A8EA6707F", 00:12:45.395 "uuid": "d4f1d923-72b9-4817-8ab6-cc2a8ea6707f" 00:12:45.395 } 00:12:45.395 ] 00:12:45.395 }, 00:12:45.395 { 00:12:45.395 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:45.395 "subtype": "NVMe", 00:12:45.395 "listen_addresses": [ 00:12:45.395 { 00:12:45.395 "transport": "VFIOUSER", 00:12:45.395 "trtype": "VFIOUSER", 00:12:45.395 "adrfam": "IPv4", 00:12:45.395 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:45.395 "trsvcid": "0" 00:12:45.395 } 00:12:45.395 
], 00:12:45.395 "allow_any_host": true, 00:12:45.395 "hosts": [], 00:12:45.395 "serial_number": "SPDK2", 00:12:45.395 "model_number": "SPDK bdev Controller", 00:12:45.395 "max_namespaces": 32, 00:12:45.395 "min_cntlid": 1, 00:12:45.395 "max_cntlid": 65519, 00:12:45.395 "namespaces": [ 00:12:45.395 { 00:12:45.395 "nsid": 1, 00:12:45.395 "bdev_name": "Malloc2", 00:12:45.395 "name": "Malloc2", 00:12:45.395 "nguid": "B312EEC0128E42CABC6151D87062F72D", 00:12:45.395 "uuid": "b312eec0-128e-42ca-bc61-51d87062f72d" 00:12:45.395 } 00:12:45.395 ] 00:12:45.395 } 00:12:45.395 ] 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3908790 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:45.395 22:04:27 -- common/autotest_common.sh@1251 -- # local i=0 00:12:45.395 22:04:27 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:45.395 22:04:27 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:45.395 22:04:27 -- common/autotest_common.sh@1262 -- # return 0 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:45.395 22:04:27 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:45.654 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.654 [2024-04-24 22:04:27.779896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.654 Malloc4 00:12:45.654 22:04:27 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:46.221 [2024-04-24 22:04:28.182222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.221 22:04:28 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.221 Asynchronous Event Request test 00:12:46.221 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.221 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.221 Registering asynchronous event callbacks... 00:12:46.221 Starting namespace attribute notice tests for all controllers... 00:12:46.221 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:46.221 aer_cb - Changed Namespace 00:12:46.221 Cleaning up... 
00:12:46.480 [ 00:12:46.480 { 00:12:46.480 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.480 "subtype": "Discovery", 00:12:46.480 "listen_addresses": [], 00:12:46.480 "allow_any_host": true, 00:12:46.480 "hosts": [] 00:12:46.480 }, 00:12:46.480 { 00:12:46.480 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:46.480 "subtype": "NVMe", 00:12:46.481 "listen_addresses": [ 00:12:46.481 { 00:12:46.481 "transport": "VFIOUSER", 00:12:46.481 "trtype": "VFIOUSER", 00:12:46.481 "adrfam": "IPv4", 00:12:46.481 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:46.481 "trsvcid": "0" 00:12:46.481 } 00:12:46.481 ], 00:12:46.481 "allow_any_host": true, 00:12:46.481 "hosts": [], 00:12:46.481 "serial_number": "SPDK1", 00:12:46.481 "model_number": "SPDK bdev Controller", 00:12:46.481 "max_namespaces": 32, 00:12:46.481 "min_cntlid": 1, 00:12:46.481 "max_cntlid": 65519, 00:12:46.481 "namespaces": [ 00:12:46.481 { 00:12:46.481 "nsid": 1, 00:12:46.481 "bdev_name": "Malloc1", 00:12:46.481 "name": "Malloc1", 00:12:46.481 "nguid": "B5F9C716840845BA86216137599628A1", 00:12:46.481 "uuid": "b5f9c716-8408-45ba-8621-6137599628a1" 00:12:46.481 }, 00:12:46.481 { 00:12:46.481 "nsid": 2, 00:12:46.481 "bdev_name": "Malloc3", 00:12:46.481 "name": "Malloc3", 00:12:46.481 "nguid": "D4F1D92372B948178AB6CC2A8EA6707F", 00:12:46.481 "uuid": "d4f1d923-72b9-4817-8ab6-cc2a8ea6707f" 00:12:46.481 } 00:12:46.481 ] 00:12:46.481 }, 00:12:46.481 { 00:12:46.481 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:46.481 "subtype": "NVMe", 00:12:46.481 "listen_addresses": [ 00:12:46.481 { 00:12:46.481 "transport": "VFIOUSER", 00:12:46.481 "trtype": "VFIOUSER", 00:12:46.481 "adrfam": "IPv4", 00:12:46.481 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:46.481 "trsvcid": "0" 00:12:46.481 } 00:12:46.481 ], 00:12:46.481 "allow_any_host": true, 00:12:46.481 "hosts": [], 00:12:46.481 "serial_number": "SPDK2", 00:12:46.481 "model_number": "SPDK bdev Controller", 00:12:46.481 "max_namespaces": 32, 00:12:46.481 
"min_cntlid": 1, 00:12:46.481 "max_cntlid": 65519, 00:12:46.481 "namespaces": [ 00:12:46.481 { 00:12:46.481 "nsid": 1, 00:12:46.481 "bdev_name": "Malloc2", 00:12:46.481 "name": "Malloc2", 00:12:46.481 "nguid": "B312EEC0128E42CABC6151D87062F72D", 00:12:46.481 "uuid": "b312eec0-128e-42ca-bc61-51d87062f72d" 00:12:46.481 }, 00:12:46.481 { 00:12:46.481 "nsid": 2, 00:12:46.481 "bdev_name": "Malloc4", 00:12:46.481 "name": "Malloc4", 00:12:46.481 "nguid": "9CB21BFFB5AA43CD9EA08AB816007050", 00:12:46.481 "uuid": "9cb21bff-b5aa-43cd-9ea0-8ab816007050" 00:12:46.481 } 00:12:46.481 ] 00:12:46.481 } 00:12:46.481 ] 00:12:46.481 22:04:28 -- target/nvmf_vfio_user.sh@44 -- # wait 3908790 00:12:46.481 22:04:28 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:46.481 22:04:28 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3903214 00:12:46.481 22:04:28 -- common/autotest_common.sh@936 -- # '[' -z 3903214 ']' 00:12:46.481 22:04:28 -- common/autotest_common.sh@940 -- # kill -0 3903214 00:12:46.481 22:04:28 -- common/autotest_common.sh@941 -- # uname 00:12:46.481 22:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:46.481 22:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3903214 00:12:46.481 22:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:46.481 22:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:46.481 22:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3903214' 00:12:46.481 killing process with pid 3903214 00:12:46.481 22:04:28 -- common/autotest_common.sh@955 -- # kill 3903214 00:12:46.481 [2024-04-24 22:04:28.524010] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:46.481 [2024-04-24 22:04:28.524047] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:46.481 22:04:28 -- common/autotest_common.sh@960 -- # wait 3903214 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3908938 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3908938' 00:12:46.740 Process pid: 3908938 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:46.740 22:04:28 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3908938 00:12:46.740 22:04:28 -- common/autotest_common.sh@817 -- # '[' -z 3908938 ']' 00:12:46.740 22:04:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.740 22:04:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:46.740 22:04:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.740 22:04:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:46.740 22:04:28 -- common/autotest_common.sh@10 -- # set +x 00:12:46.740 [2024-04-24 22:04:28.970502] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:12:46.740 [2024-04-24 22:04:28.971564] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:46.740 [2024-04-24 22:04:28.971637] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.999 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.999 [2024-04-24 22:04:29.035563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.999 [2024-04-24 22:04:29.152010] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.999 [2024-04-24 22:04:29.152076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.999 [2024-04-24 22:04:29.152101] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.999 [2024-04-24 22:04:29.152116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.999 [2024-04-24 22:04:29.152128] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.999 [2024-04-24 22:04:29.152232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.999 [2024-04-24 22:04:29.152304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.999 [2024-04-24 22:04:29.152356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.999 [2024-04-24 22:04:29.152359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.258 [2024-04-24 22:04:29.259903] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:12:47.258 [2024-04-24 22:04:29.260150] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 
00:12:47.258 [2024-04-24 22:04:29.260459] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:47.258 [2024-04-24 22:04:29.261208] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:47.258 [2024-04-24 22:04:29.261365] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:12:47.826 22:04:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:47.826 22:04:29 -- common/autotest_common.sh@850 -- # return 0 00:12:47.826 22:04:29 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:48.764 22:04:30 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:49.333 22:04:31 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:49.333 22:04:31 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:49.333 22:04:31 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.333 22:04:31 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:49.333 22:04:31 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:49.902 Malloc1 00:12:49.902 22:04:32 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:50.160 22:04:32 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:50.418 22:04:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 
00:12:50.685 [2024-04-24 22:04:32.885007] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:50.685 22:04:32 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:50.685 22:04:32 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:50.685 22:04:32 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:51.257 Malloc2 00:12:51.257 22:04:33 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:51.515 22:04:33 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:51.515 22:04:33 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:52.085 22:04:34 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:52.085 22:04:34 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3908938 00:12:52.085 22:04:34 -- common/autotest_common.sh@936 -- # '[' -z 3908938 ']' 00:12:52.085 22:04:34 -- common/autotest_common.sh@940 -- # kill -0 3908938 00:12:52.085 22:04:34 -- common/autotest_common.sh@941 -- # uname 00:12:52.085 22:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.085 22:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3908938 00:12:52.085 22:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.085 22:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.085 22:04:34 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 3908938' 00:12:52.085 killing process with pid 3908938 00:12:52.085 22:04:34 -- common/autotest_common.sh@955 -- # kill 3908938 00:12:52.085 [2024-04-24 22:04:34.070937] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:52.085 22:04:34 -- common/autotest_common.sh@960 -- # wait 3908938 00:12:52.373 22:04:34 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.373 22:04:34 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.373 00:12:52.373 real 0m55.329s 00:12:52.373 user 3m37.800s 00:12:52.373 sys 0m5.340s 00:12:52.373 22:04:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.373 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.373 ************************************ 00:12:52.373 END TEST nvmf_vfio_user 00:12:52.373 ************************************ 00:12:52.373 22:04:34 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.373 22:04:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.373 22:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.373 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.373 ************************************ 00:12:52.373 START TEST nvmf_vfio_user_nvme_compliance 00:12:52.373 ************************************ 00:12:52.373 22:04:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:52.373 * Looking for test storage... 
00:12:52.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:52.373 22:04:34 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.373 22:04:34 -- nvmf/common.sh@7 -- # uname -s 00:12:52.373 22:04:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.373 22:04:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.373 22:04:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.373 22:04:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.373 22:04:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.373 22:04:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.373 22:04:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.373 22:04:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.373 22:04:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.373 22:04:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.373 22:04:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:52.373 22:04:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:52.373 22:04:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.373 22:04:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.373 22:04:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.373 22:04:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.373 22:04:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.373 22:04:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.373 22:04:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.373 22:04:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.373 22:04:34 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.373 22:04:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.373 22:04:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.373 22:04:34 -- paths/export.sh@5 -- # export PATH 00:12:52.373 22:04:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.373 22:04:34 -- nvmf/common.sh@47 -- # : 0 00:12:52.373 22:04:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.373 22:04:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.373 22:04:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.373 22:04:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.373 22:04:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.373 22:04:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.373 22:04:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.373 22:04:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.373 22:04:34 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.373 22:04:34 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.373 22:04:34 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.373 22:04:34 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.373 22:04:34 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:52.373 22:04:34 -- compliance/compliance.sh@20 -- # nvmfpid=3909676 00:12:52.373 22:04:34 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:52.373 22:04:34 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3909676' 00:12:52.373 Process pid: 3909676 00:12:52.373 22:04:34 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:12:52.373 22:04:34 -- compliance/compliance.sh@24 -- # waitforlisten 3909676 00:12:52.373 22:04:34 -- common/autotest_common.sh@817 -- # '[' -z 3909676 ']' 00:12:52.373 22:04:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.373 22:04:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:52.373 22:04:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.373 22:04:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:52.373 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:52.632 [2024-04-24 22:04:34.651547] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:52.632 [2024-04-24 22:04:34.651637] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.632 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.632 [2024-04-24 22:04:34.719026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.632 [2024-04-24 22:04:34.840384] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.632 [2024-04-24 22:04:34.840450] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.632 [2024-04-24 22:04:34.840467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.632 [2024-04-24 22:04:34.840480] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.632 [2024-04-24 22:04:34.840492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.632 [2024-04-24 22:04:34.840618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.632 [2024-04-24 22:04:34.840672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.632 [2024-04-24 22:04:34.840675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.890 22:04:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:52.890 22:04:34 -- common/autotest_common.sh@850 -- # return 0 00:12:52.890 22:04:34 -- compliance/compliance.sh@26 -- # sleep 1 00:12:53.860 22:04:35 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:53.860 22:04:35 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:53.860 22:04:35 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:53.860 22:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.860 22:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 22:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.860 22:04:35 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:53.860 22:04:35 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:53.860 22:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.860 22:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 malloc0 00:12:53.860 22:04:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.860 22:04:36 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:53.860 22:04:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.860 22:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 22:04:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.860 22:04:36 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:53.860 22:04:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.860 
22:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 22:04:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.860 22:04:36 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:53.860 22:04:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.860 22:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.860 [2024-04-24 22:04:36.050145] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:53.860 22:04:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.860 22:04:36 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:53.860 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.117 00:12:54.117 00:12:54.117 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.117 http://cunit.sourceforge.net/ 00:12:54.117 00:12:54.117 00:12:54.117 Suite: nvme_compliance 00:12:54.117 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 22:04:36.225051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.117 [2024-04-24 22:04:36.226629] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:54.117 [2024-04-24 22:04:36.226659] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:54.117 [2024-04-24 22:04:36.226675] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:54.117 [2024-04-24 22:04:36.228081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.117 passed 00:12:54.117 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 22:04:36.321835] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.117 [2024-04-24 22:04:36.324860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.117 passed 00:12:54.374 Test: admin_identify_ns ...[2024-04-24 22:04:36.414174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.374 [2024-04-24 22:04:36.477412] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:54.374 [2024-04-24 22:04:36.485430] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:54.374 [2024-04-24 22:04:36.513589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.374 passed 00:12:54.374 Test: admin_get_features_mandatory_features ...[2024-04-24 22:04:36.601728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.374 [2024-04-24 22:04:36.604753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.632 passed 00:12:54.632 Test: admin_get_features_optional_features ...[2024-04-24 22:04:36.698375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.632 [2024-04-24 22:04:36.701404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.632 passed 00:12:54.632 Test: admin_set_features_number_of_queues ...[2024-04-24 22:04:36.791062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.889 [2024-04-24 22:04:36.894651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:54.889 passed 00:12:54.889 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 22:04:36.985758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:54.889 [2024-04-24 22:04:36.988786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:12:54.889 passed 00:12:54.889 Test: admin_get_log_page_with_lpo ...[2024-04-24 22:04:37.079183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.146 [2024-04-24 22:04:37.146415] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:55.146 [2024-04-24 22:04:37.159510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.146 passed 00:12:55.146 Test: fabric_property_get ...[2024-04-24 22:04:37.250602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.146 [2024-04-24 22:04:37.251921] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:55.146 [2024-04-24 22:04:37.253624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.146 passed 00:12:55.146 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 22:04:37.343207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.146 [2024-04-24 22:04:37.344544] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:55.146 [2024-04-24 22:04:37.348253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.146 passed 00:12:55.404 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 22:04:37.433913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.404 [2024-04-24 22:04:37.517408] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.404 [2024-04-24 22:04:37.533408] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.404 [2024-04-24 22:04:37.538532] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.404 passed 00:12:55.404 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 
22:04:37.631015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.404 [2024-04-24 22:04:37.632324] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:55.404 [2024-04-24 22:04:37.634040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.662 passed 00:12:55.662 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 22:04:37.725787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.662 [2024-04-24 22:04:37.801407] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:55.662 [2024-04-24 22:04:37.825409] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:55.662 [2024-04-24 22:04:37.830511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.662 passed 00:12:55.920 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 22:04:37.918692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.920 [2024-04-24 22:04:37.922725] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:55.920 [2024-04-24 22:04:37.922771] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:55.920 [2024-04-24 22:04:37.924747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.920 passed 00:12:55.920 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 22:04:38.015424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.920 [2024-04-24 22:04:38.112404] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:55.920 [2024-04-24 22:04:38.120407] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:55.920 [2024-04-24 
22:04:38.128406] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:55.920 [2024-04-24 22:04:38.136421] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:55.920 [2024-04-24 22:04:38.164525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.179 passed 00:12:56.179 Test: admin_create_io_sq_verify_pc ...[2024-04-24 22:04:38.257071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.179 [2024-04-24 22:04:38.272422] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:56.179 [2024-04-24 22:04:38.288700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.179 passed 00:12:56.179 Test: admin_create_io_qp_max_qps ...[2024-04-24 22:04:38.379356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.553 [2024-04-24 22:04:39.482412] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:57.811 [2024-04-24 22:04:39.858199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.811 passed 00:12:57.811 Test: admin_create_io_sq_shared_cq ...[2024-04-24 22:04:39.950030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.070 [2024-04-24 22:04:40.081414] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:58.070 [2024-04-24 22:04:40.118535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.070 passed 00:12:58.070 00:12:58.070 Run Summary: Type Total Ran Passed Failed Inactive 00:12:58.070 suites 1 1 n/a 0 0 00:12:58.070 tests 18 18 18 0 0 00:12:58.070 asserts 360 360 360 0 n/a 00:12:58.070 00:12:58.070 Elapsed time = 1.635 seconds 00:12:58.070 22:04:40 -- compliance/compliance.sh@42 -- # 
killprocess 3909676 00:12:58.070 22:04:40 -- common/autotest_common.sh@936 -- # '[' -z 3909676 ']' 00:12:58.070 22:04:40 -- common/autotest_common.sh@940 -- # kill -0 3909676 00:12:58.070 22:04:40 -- common/autotest_common.sh@941 -- # uname 00:12:58.070 22:04:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.070 22:04:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3909676 00:12:58.070 22:04:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.070 22:04:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.070 22:04:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3909676' 00:12:58.070 killing process with pid 3909676 00:12:58.070 22:04:40 -- common/autotest_common.sh@955 -- # kill 3909676 00:12:58.070 [2024-04-24 22:04:40.205335] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:58.070 22:04:40 -- common/autotest_common.sh@960 -- # wait 3909676 00:12:58.328 22:04:40 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:58.328 22:04:40 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:58.328 00:12:58.328 real 0m5.991s 00:12:58.328 user 0m16.741s 00:12:58.328 sys 0m0.593s 00:12:58.328 22:04:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:58.328 22:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.328 ************************************ 00:12:58.328 END TEST nvmf_vfio_user_nvme_compliance 00:12:58.328 ************************************ 00:12:58.328 22:04:40 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.328 22:04:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.328 22:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:12:58.328 22:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:58.586 ************************************ 00:12:58.586 START TEST nvmf_vfio_user_fuzz 00:12:58.586 ************************************ 00:12:58.586 22:04:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:58.586 * Looking for test storage... 00:12:58.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.586 22:04:40 -- nvmf/common.sh@7 -- # uname -s 00:12:58.586 22:04:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.586 22:04:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.586 22:04:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.586 22:04:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.586 22:04:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.586 22:04:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.586 22:04:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.586 22:04:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.586 22:04:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.586 22:04:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.586 22:04:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:58.586 22:04:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:58.586 22:04:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.586 22:04:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.586 22:04:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.586 22:04:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.586 
22:04:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.586 22:04:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.586 22:04:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.586 22:04:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.586 22:04:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.586 22:04:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.586 22:04:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.586 22:04:40 -- paths/export.sh@5 -- # export PATH 00:12:58.586 22:04:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.586 22:04:40 -- nvmf/common.sh@47 -- # : 0 00:12:58.586 22:04:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.586 22:04:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.586 22:04:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.586 22:04:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.586 22:04:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.586 22:04:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.586 22:04:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.586 22:04:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.586 22:04:40 -- 
target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3910470 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3910470' 00:12:58.586 Process pid: 3910470 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.586 22:04:40 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3910470 00:12:58.586 22:04:40 -- common/autotest_common.sh@817 -- # '[' -z 3910470 ']' 00:12:58.586 22:04:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.586 22:04:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:58.586 22:04:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:58.586 22:04:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:58.586 22:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 22:04:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.151 22:04:41 -- common/autotest_common.sh@850 -- # return 0 00:12:59.151 22:04:41 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:00.083 22:04:42 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:00.083 22:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.083 22:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 22:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.083 22:04:42 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:00.083 22:04:42 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:00.083 22:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.083 22:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 malloc0 00:13:00.083 22:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.083 22:04:42 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:00.083 22:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.083 22:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 22:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.083 22:04:42 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:00.083 22:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.083 22:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.083 22:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.084 22:04:42 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:00.084 22:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:13:00.084 22:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:00.084 22:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.084 22:04:42 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:00.084 22:04:42 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:32.147 Fuzzing completed. Shutting down the fuzz application 00:13:32.147 00:13:32.148 Dumping successful admin opcodes: 00:13:32.148 8, 9, 10, 24, 00:13:32.148 Dumping successful io opcodes: 00:13:32.148 0, 00:13:32.148 NS: 0x200003a1ef00 I/O qp, Total commands completed: 562010, total successful commands: 2159, random_seed: 10654912 00:13:32.148 NS: 0x200003a1ef00 admin qp, Total commands completed: 86078, total successful commands: 690, random_seed: 4103946432 00:13:32.148 22:05:13 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:32.148 22:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.148 22:05:13 -- common/autotest_common.sh@10 -- # set +x 00:13:32.148 22:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.148 22:05:13 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3910470 00:13:32.148 22:05:13 -- common/autotest_common.sh@936 -- # '[' -z 3910470 ']' 00:13:32.148 22:05:13 -- common/autotest_common.sh@940 -- # kill -0 3910470 00:13:32.148 22:05:13 -- common/autotest_common.sh@941 -- # uname 00:13:32.148 22:05:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:32.148 22:05:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3910470 00:13:32.148 22:05:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:32.148 22:05:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:32.148 22:05:13 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 3910470' 00:13:32.148 killing process with pid 3910470 00:13:32.148 22:05:13 -- common/autotest_common.sh@955 -- # kill 3910470 00:13:32.148 22:05:13 -- common/autotest_common.sh@960 -- # wait 3910470 00:13:32.148 22:05:14 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:32.148 22:05:14 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:32.148 00:13:32.148 real 0m33.431s 00:13:32.148 user 0m32.547s 00:13:32.148 sys 0m28.168s 00:13:32.148 22:05:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:32.148 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:13:32.148 ************************************ 00:13:32.148 END TEST nvmf_vfio_user_fuzz 00:13:32.148 ************************************ 00:13:32.148 22:05:14 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:32.148 22:05:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:32.148 22:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.148 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:13:32.148 ************************************ 00:13:32.148 START TEST nvmf_host_management 00:13:32.148 ************************************ 00:13:32.148 22:05:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:32.148 * Looking for test storage... 
00:13:32.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.148 22:05:14 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.148 22:05:14 -- nvmf/common.sh@7 -- # uname -s 00:13:32.148 22:05:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.148 22:05:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.148 22:05:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.148 22:05:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.148 22:05:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.148 22:05:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.148 22:05:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.148 22:05:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.148 22:05:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.148 22:05:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.148 22:05:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.148 22:05:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.148 22:05:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.148 22:05:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.148 22:05:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.148 22:05:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.148 22:05:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.148 22:05:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.148 22:05:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.148 22:05:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.148 22:05:14 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.148 22:05:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.148 22:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.148 22:05:14 -- paths/export.sh@5 -- # export PATH 00:13:32.148 22:05:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.148 22:05:14 -- nvmf/common.sh@47 -- # : 0 00:13:32.148 22:05:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.148 22:05:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.148 22:05:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.148 22:05:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.148 22:05:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.148 22:05:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.148 22:05:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.148 22:05:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.148 22:05:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:32.148 22:05:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:32.148 22:05:14 -- target/host_management.sh@105 -- # nvmftestinit 00:13:32.148 22:05:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:32.148 22:05:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.148 22:05:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:32.148 22:05:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:32.148 22:05:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:32.148 22:05:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.148 22:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.148 22:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:13:32.148 22:05:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:32.148 22:05:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:32.148 22:05:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.148 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:13:34.678 22:05:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.678 22:05:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.678 22:05:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.678 22:05:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.678 22:05:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.678 22:05:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.678 22:05:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.678 22:05:16 -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.678 22:05:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.678 22:05:16 -- nvmf/common.sh@296 -- # e810=() 00:13:34.678 22:05:16 -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.678 22:05:16 -- nvmf/common.sh@297 -- # x722=() 00:13:34.678 22:05:16 -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.678 22:05:16 -- nvmf/common.sh@298 -- # mlx=() 00:13:34.678 22:05:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.678 22:05:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:13:34.678 22:05:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.678 22:05:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.678 22:05:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.678 22:05:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.678 22:05:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.678 22:05:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:34.678 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:34.678 22:05:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.678 22:05:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:34.678 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:34.678 22:05:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.678 22:05:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.678 
22:05:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.678 22:05:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.678 22:05:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.678 22:05:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.678 22:05:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.678 22:05:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:34.678 Found net devices under 0000:84:00.0: cvl_0_0 00:13:34.678 22:05:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.678 22:05:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.678 22:05:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.678 22:05:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.678 22:05:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.678 22:05:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:34.678 Found net devices under 0000:84:00.1: cvl_0_1 00:13:34.678 22:05:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.678 22:05:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:34.679 22:05:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:34.679 22:05:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:34.679 22:05:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:34.679 22:05:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:34.679 22:05:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.679 22:05:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.679 22:05:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.679 22:05:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.679 22:05:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.679 22:05:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.679 22:05:16 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.679 22:05:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.679 22:05:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.679 22:05:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.679 22:05:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.679 22:05:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.679 22:05:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.679 22:05:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.679 22:05:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.679 22:05:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.679 22:05:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.679 22:05:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.679 22:05:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.679 22:05:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:13:34.679 00:13:34.679 --- 10.0.0.2 ping statistics --- 00:13:34.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.679 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:34.679 22:05:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:13:34.679 00:13:34.679 --- 10.0.0.1 ping statistics --- 00:13:34.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.679 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:13:34.679 22:05:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.679 22:05:16 -- nvmf/common.sh@411 -- # return 0 00:13:34.679 22:05:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:34.679 22:05:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.679 22:05:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:34.679 22:05:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:34.679 22:05:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.679 22:05:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:34.679 22:05:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:34.679 22:05:16 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:34.679 22:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:34.679 22:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.679 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.937 ************************************ 00:13:34.937 START TEST nvmf_host_management 00:13:34.937 ************************************ 00:13:34.937 22:05:16 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:34.937 22:05:16 -- target/host_management.sh@69 -- # starttarget 00:13:34.937 22:05:16 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:34.937 22:05:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:34.937 22:05:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:34.937 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.937 22:05:16 -- nvmf/common.sh@470 -- # nvmfpid=3916139 00:13:34.937 22:05:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:34.937 22:05:16 -- nvmf/common.sh@471 -- # waitforlisten 3916139 00:13:34.937 22:05:16 -- common/autotest_common.sh@817 -- # '[' -z 3916139 ']' 00:13:34.937 22:05:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.937 22:05:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.937 22:05:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.937 22:05:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.937 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.937 [2024-04-24 22:05:16.998211] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:34.937 [2024-04-24 22:05:16.998313] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.937 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.937 [2024-04-24 22:05:17.087404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.196 [2024-04-24 22:05:17.231716] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.196 [2024-04-24 22:05:17.231786] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.196 [2024-04-24 22:05:17.231805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.196 [2024-04-24 22:05:17.231821] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:35.196 [2024-04-24 22:05:17.231835] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.196 [2024-04-24 22:05:17.231970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.196 [2024-04-24 22:05:17.232026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.196 [2024-04-24 22:05:17.232076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.196 [2024-04-24 22:05:17.232080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.196 22:05:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:35.196 22:05:17 -- common/autotest_common.sh@850 -- # return 0 00:13:35.196 22:05:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:35.197 22:05:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:35.197 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 22:05:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.197 22:05:17 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.197 22:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.197 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 [2024-04-24 22:05:17.395378] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.197 22:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.197 22:05:17 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:35.197 22:05:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:35.197 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 22:05:17 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:35.197 22:05:17 -- target/host_management.sh@23 -- # cat 00:13:35.197 22:05:17 -- target/host_management.sh@30 -- # rpc_cmd 00:13:35.197 
22:05:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.197 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.197 Malloc0 00:13:35.455 [2024-04-24 22:05:17.455377] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:35.455 [2024-04-24 22:05:17.455745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.455 22:05:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.455 22:05:17 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:35.455 22:05:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:35.455 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.455 22:05:17 -- target/host_management.sh@73 -- # perfpid=3916192 00:13:35.455 22:05:17 -- target/host_management.sh@74 -- # waitforlisten 3916192 /var/tmp/bdevperf.sock 00:13:35.455 22:05:17 -- common/autotest_common.sh@817 -- # '[' -z 3916192 ']' 00:13:35.455 22:05:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.455 22:05:17 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:35.455 22:05:17 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:35.455 22:05:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:35.455 22:05:17 -- nvmf/common.sh@521 -- # config=() 00:13:35.455 22:05:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:35.455 22:05:17 -- nvmf/common.sh@521 -- # local subsystem config 00:13:35.455 22:05:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:35.455 22:05:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:35.455 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.455 22:05:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:35.455 { 00:13:35.455 "params": { 00:13:35.455 "name": "Nvme$subsystem", 00:13:35.455 "trtype": "$TEST_TRANSPORT", 00:13:35.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:35.455 "adrfam": "ipv4", 00:13:35.455 "trsvcid": "$NVMF_PORT", 00:13:35.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:35.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:35.455 "hdgst": ${hdgst:-false}, 00:13:35.455 "ddgst": ${ddgst:-false} 00:13:35.455 }, 00:13:35.455 "method": "bdev_nvme_attach_controller" 00:13:35.455 } 00:13:35.455 EOF 00:13:35.455 )") 00:13:35.455 22:05:17 -- nvmf/common.sh@543 -- # cat 00:13:35.455 22:05:17 -- nvmf/common.sh@545 -- # jq . 00:13:35.455 22:05:17 -- nvmf/common.sh@546 -- # IFS=, 00:13:35.455 22:05:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:35.455 "params": { 00:13:35.455 "name": "Nvme0", 00:13:35.455 "trtype": "tcp", 00:13:35.455 "traddr": "10.0.0.2", 00:13:35.455 "adrfam": "ipv4", 00:13:35.455 "trsvcid": "4420", 00:13:35.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:35.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:35.455 "hdgst": false, 00:13:35.455 "ddgst": false 00:13:35.455 }, 00:13:35.455 "method": "bdev_nvme_attach_controller" 00:13:35.455 }' 00:13:35.455 [2024-04-24 22:05:17.576260] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:35.455 [2024-04-24 22:05:17.576453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916192 ] 00:13:35.455 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.455 [2024-04-24 22:05:17.688340] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.713 [2024-04-24 22:05:17.811753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.972 Running I/O for 10 seconds... 00:13:35.972 22:05:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:35.972 22:05:18 -- common/autotest_common.sh@850 -- # return 0 00:13:35.972 22:05:18 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:35.972 22:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.972 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.972 22:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.972 22:05:18 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:35.972 22:05:18 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:35.972 22:05:18 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:35.972 22:05:18 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:35.972 22:05:18 -- target/host_management.sh@52 -- # local ret=1 00:13:35.972 22:05:18 -- target/host_management.sh@53 -- # local i 00:13:35.972 22:05:18 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:35.972 22:05:18 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:35.972 22:05:18 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:35.972 22:05:18 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:13:35.972 22:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.972 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:35.972 22:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.972 22:05:18 -- target/host_management.sh@55 -- # read_io_count=67 00:13:35.972 22:05:18 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:35.972 22:05:18 -- target/host_management.sh@62 -- # sleep 0.25 00:13:36.231 22:05:18 -- target/host_management.sh@54 -- # (( i-- )) 00:13:36.231 22:05:18 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:36.231 22:05:18 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:36.231 22:05:18 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:36.231 22:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.231 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.231 22:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.231 22:05:18 -- target/host_management.sh@55 -- # read_io_count=451 00:13:36.231 22:05:18 -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:36.231 22:05:18 -- target/host_management.sh@59 -- # ret=0 00:13:36.231 22:05:18 -- target/host_management.sh@60 -- # break 00:13:36.231 22:05:18 -- target/host_management.sh@64 -- # return 0 00:13:36.231 22:05:18 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:36.231 22:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.231 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.231 [2024-04-24 22:05:18.479138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7c90 is same with the state(5) to be set 00:13:36.231 [2024-04-24 22:05:18.479236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7c90 is same with the state(5) to be set 00:13:36.232 [2024-04-24 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7c90 is same with the state(5) to be set 00:13:36.232 [2024-04-24 22:05:18.480283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.232 [2024-04-24 22:05:18.480696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.232 [2024-04-24 22:05:18.480711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:36.232 [2024-04-24 22:05:18.480728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.480976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.480994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 
22:05:18.481472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.481978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 [2024-04-24 22:05:18.481993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.233 [2024-04-24 22:05:18.482010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.233 
[2024-04-24 22:05:18.482025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:36.234 [2024-04-24 22:05:18.482453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.234 [2024-04-24 22:05:18.482470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a80c0 is same with the state(5) to be set 00:13:36.234 [2024-04-24 22:05:18.482554] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23a80c0 was disconnected and freed. reset controller. 
00:13:36.234 22:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.234 22:05:18 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:36.234 [2024-04-24 22:05:18.483833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:36.234 22:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:36.234 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.234 task offset: 65536 on job bdev=Nvme0n1 fails 00:13:36.234 00:13:36.234 Latency(us) 00:13:36.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:36.234 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:36.234 Verification LBA range: start 0x0 length 0x400 00:13:36.234 Nvme0n1 : 0.44 1165.13 72.82 145.64 0.00 47473.52 6747.78 38253.61 00:13:36.234 =================================================================================================================== 00:13:36.234 Total : 1165.13 72.82 145.64 0.00 47473.52 6747.78 38253.61 00:13:36.492 [2024-04-24 22:05:18.486157] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.492 [2024-04-24 22:05:18.486192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f973e0 (9): Bad file descriptor 00:13:36.492 22:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:36.492 22:05:18 -- target/host_management.sh@87 -- # sleep 1 00:13:36.492 [2024-04-24 22:05:18.618578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:37.425 22:05:19 -- target/host_management.sh@91 -- # kill -9 3916192 00:13:37.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3916192) - No such process 00:13:37.425 22:05:19 -- target/host_management.sh@91 -- # true 00:13:37.425 22:05:19 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:37.425 22:05:19 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:37.425 22:05:19 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:37.425 22:05:19 -- nvmf/common.sh@521 -- # config=() 00:13:37.425 22:05:19 -- nvmf/common.sh@521 -- # local subsystem config 00:13:37.425 22:05:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:37.425 22:05:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:37.425 { 00:13:37.425 "params": { 00:13:37.425 "name": "Nvme$subsystem", 00:13:37.425 "trtype": "$TEST_TRANSPORT", 00:13:37.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:37.425 "adrfam": "ipv4", 00:13:37.425 "trsvcid": "$NVMF_PORT", 00:13:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:37.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:37.425 "hdgst": ${hdgst:-false}, 00:13:37.425 "ddgst": ${ddgst:-false} 00:13:37.425 }, 00:13:37.425 "method": "bdev_nvme_attach_controller" 00:13:37.425 } 00:13:37.425 EOF 00:13:37.425 )") 00:13:37.425 22:05:19 -- nvmf/common.sh@543 -- # cat 00:13:37.425 22:05:19 -- nvmf/common.sh@545 -- # jq . 
00:13:37.425 22:05:19 -- nvmf/common.sh@546 -- # IFS=, 00:13:37.425 22:05:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:37.425 "params": { 00:13:37.425 "name": "Nvme0", 00:13:37.425 "trtype": "tcp", 00:13:37.425 "traddr": "10.0.0.2", 00:13:37.425 "adrfam": "ipv4", 00:13:37.425 "trsvcid": "4420", 00:13:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:37.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:37.425 "hdgst": false, 00:13:37.425 "ddgst": false 00:13:37.425 }, 00:13:37.425 "method": "bdev_nvme_attach_controller" 00:13:37.425 }' 00:13:37.425 [2024-04-24 22:05:19.542041] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:37.425 [2024-04-24 22:05:19.542130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916463 ] 00:13:37.425 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.425 [2024-04-24 22:05:19.612616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.683 [2024-04-24 22:05:19.736837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.943 Running I/O for 1 seconds... 
00:13:38.906 00:13:38.906 Latency(us) 00:13:38.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.906 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:38.906 Verification LBA range: start 0x0 length 0x400 00:13:38.906 Nvme0n1 : 1.02 1375.26 85.95 0.00 0.00 45767.58 10485.76 37671.06 00:13:38.906 =================================================================================================================== 00:13:38.906 Total : 1375.26 85.95 0.00 0.00 45767.58 10485.76 37671.06 00:13:39.164 22:05:21 -- target/host_management.sh@102 -- # stoptarget 00:13:39.164 22:05:21 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:39.164 22:05:21 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:39.164 22:05:21 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:39.164 22:05:21 -- target/host_management.sh@40 -- # nvmftestfini 00:13:39.164 22:05:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:39.164 22:05:21 -- nvmf/common.sh@117 -- # sync 00:13:39.164 22:05:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.164 22:05:21 -- nvmf/common.sh@120 -- # set +e 00:13:39.164 22:05:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.164 22:05:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.164 rmmod nvme_tcp 00:13:39.164 rmmod nvme_fabrics 00:13:39.164 rmmod nvme_keyring 00:13:39.164 22:05:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.164 22:05:21 -- nvmf/common.sh@124 -- # set -e 00:13:39.164 22:05:21 -- nvmf/common.sh@125 -- # return 0 00:13:39.164 22:05:21 -- nvmf/common.sh@478 -- # '[' -n 3916139 ']' 00:13:39.164 22:05:21 -- nvmf/common.sh@479 -- # killprocess 3916139 00:13:39.164 22:05:21 -- common/autotest_common.sh@936 -- # '[' -z 3916139 ']' 00:13:39.164 22:05:21 -- 
common/autotest_common.sh@940 -- # kill -0 3916139 00:13:39.164 22:05:21 -- common/autotest_common.sh@941 -- # uname 00:13:39.164 22:05:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.164 22:05:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3916139 00:13:39.422 22:05:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:39.422 22:05:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:39.422 22:05:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3916139' 00:13:39.422 killing process with pid 3916139 00:13:39.422 22:05:21 -- common/autotest_common.sh@955 -- # kill 3916139 00:13:39.422 [2024-04-24 22:05:21.425625] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:39.422 22:05:21 -- common/autotest_common.sh@960 -- # wait 3916139 00:13:39.680 [2024-04-24 22:05:21.705099] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:39.680 22:05:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:39.680 22:05:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:39.680 22:05:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:39.680 22:05:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.680 22:05:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.680 22:05:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.680 22:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.680 22:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.578 22:05:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.578 00:13:41.578 real 0m6.843s 00:13:41.578 user 0m20.718s 00:13:41.578 sys 0m1.392s 00:13:41.578 22:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.578 22:05:23 -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.578 ************************************ 00:13:41.578 END TEST nvmf_host_management 00:13:41.578 ************************************ 00:13:41.578 22:05:23 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:41.578 00:13:41.578 real 0m9.559s 00:13:41.578 user 0m21.602s 00:13:41.578 sys 0m3.247s 00:13:41.578 22:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.578 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.578 ************************************ 00:13:41.578 END TEST nvmf_host_management 00:13:41.578 ************************************ 00:13:41.578 22:05:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:41.578 22:05:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.578 22:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.578 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 ************************************ 00:13:41.836 START TEST nvmf_lvol 00:13:41.836 ************************************ 00:13:41.836 22:05:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:41.836 * Looking for test storage... 
00:13:41.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.836 22:05:24 -- nvmf/common.sh@7 -- # uname -s 00:13:41.836 22:05:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.836 22:05:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.836 22:05:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.836 22:05:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.836 22:05:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.836 22:05:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.836 22:05:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.836 22:05:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.836 22:05:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.836 22:05:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.836 22:05:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:41.836 22:05:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:41.836 22:05:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.836 22:05:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.836 22:05:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.836 22:05:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.836 22:05:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.836 22:05:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.836 22:05:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.836 22:05:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.836 22:05:24 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 22:05:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 22:05:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 22:05:24 -- paths/export.sh@5 -- # export PATH 00:13:41.836 22:05:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.836 22:05:24 -- nvmf/common.sh@47 -- # : 0 00:13:41.836 22:05:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.836 22:05:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.836 22:05:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.836 22:05:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.836 22:05:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.836 22:05:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.836 22:05:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.836 22:05:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.836 22:05:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:41.836 22:05:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:41.836 22:05:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.836 22:05:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:41.836 22:05:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:41.836 22:05:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:13:41.836 22:05:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.836 22:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.836 22:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.836 22:05:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:41.836 22:05:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:41.836 22:05:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.837 22:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:44.362 22:05:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:44.362 22:05:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.362 22:05:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.362 22:05:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.362 22:05:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.362 22:05:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.362 22:05:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.362 22:05:26 -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.362 22:05:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.362 22:05:26 -- nvmf/common.sh@296 -- # e810=() 00:13:44.362 22:05:26 -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.362 22:05:26 -- nvmf/common.sh@297 -- # x722=() 00:13:44.362 22:05:26 -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.362 22:05:26 -- nvmf/common.sh@298 -- # mlx=() 00:13:44.362 22:05:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.362 22:05:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.362 22:05:26 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.362 22:05:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.362 22:05:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.362 22:05:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.362 22:05:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:44.362 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:44.362 22:05:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.362 22:05:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:44.362 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:44.362 22:05:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.362 22:05:26 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.362 22:05:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.362 22:05:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.362 22:05:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:44.362 Found net devices under 0000:84:00.0: cvl_0_0 00:13:44.362 22:05:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.362 22:05:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.362 22:05:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.362 22:05:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.362 22:05:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:44.362 Found net devices under 0000:84:00.1: cvl_0_1 00:13:44.362 22:05:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.362 22:05:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:44.362 22:05:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:44.362 22:05:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:44.362 22:05:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.362 22:05:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.362 22:05:26 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.362 22:05:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.362 22:05:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.363 22:05:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.363 22:05:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.363 22:05:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.363 22:05:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.363 22:05:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.363 22:05:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.363 22:05:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.363 22:05:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.363 22:05:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.363 22:05:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.363 22:05:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.363 22:05:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.363 22:05:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.363 22:05:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.363 22:05:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:13:44.363 00:13:44.363 --- 10.0.0.2 ping statistics --- 00:13:44.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.363 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:44.363 22:05:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:44.363 00:13:44.363 --- 10.0.0.1 ping statistics --- 00:13:44.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.363 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:44.363 22:05:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.363 22:05:26 -- nvmf/common.sh@411 -- # return 0 00:13:44.363 22:05:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:44.363 22:05:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.363 22:05:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:44.363 22:05:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:44.363 22:05:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.363 22:05:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:44.363 22:05:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:44.363 22:05:26 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:44.363 22:05:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:44.363 22:05:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:44.363 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.363 22:05:26 -- nvmf/common.sh@470 -- # nvmfpid=3918705 00:13:44.363 22:05:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:44.363 22:05:26 -- nvmf/common.sh@471 -- # waitforlisten 3918705 00:13:44.363 22:05:26 -- common/autotest_common.sh@817 -- # '[' -z 3918705 ']' 00:13:44.363 22:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.363 22:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:44.363 22:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:44.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.363 22:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:44.363 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.363 [2024-04-24 22:05:26.421419] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:44.363 [2024-04-24 22:05:26.421507] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.363 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.363 [2024-04-24 22:05:26.501121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.620 [2024-04-24 22:05:26.625747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.620 [2024-04-24 22:05:26.625803] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.620 [2024-04-24 22:05:26.625820] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.620 [2024-04-24 22:05:26.625834] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.620 [2024-04-24 22:05:26.625845] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:44.620 [2024-04-24 22:05:26.625905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.620 [2024-04-24 22:05:26.625960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.620 [2024-04-24 22:05:26.625965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.620 22:05:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:44.620 22:05:26 -- common/autotest_common.sh@850 -- # return 0 00:13:44.620 22:05:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:44.620 22:05:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.620 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.620 22:05:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.620 22:05:26 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.877 [2024-04-24 22:05:27.058414] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.877 22:05:27 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:45.441 22:05:27 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:45.441 22:05:27 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.006 22:05:27 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:46.006 22:05:27 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:46.006 22:05:28 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:46.571 22:05:28 -- target/nvmf_lvol.sh@29 -- # lvs=312fa625-ed1e-4dc8-9b30-77d4a9899819 00:13:46.571 22:05:28 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 312fa625-ed1e-4dc8-9b30-77d4a9899819 lvol 20 00:13:47.137 22:05:29 -- target/nvmf_lvol.sh@32 -- # lvol=dd0543e4-66db-4f16-aa8a-91e94c06bef9 00:13:47.137 22:05:29 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:47.703 22:05:29 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd0543e4-66db-4f16-aa8a-91e94c06bef9 00:13:48.268 22:05:30 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:48.525 [2024-04-24 22:05:30.699944] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:48.525 [2024-04-24 22:05:30.700272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.526 22:05:30 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.089 22:05:31 -- target/nvmf_lvol.sh@42 -- # perf_pid=3919263 00:13:49.090 22:05:31 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:49.090 22:05:31 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:49.090 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.067 22:05:32 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dd0543e4-66db-4f16-aa8a-91e94c06bef9 MY_SNAPSHOT 00:13:50.326 22:05:32 -- 
target/nvmf_lvol.sh@47 -- # snapshot=dc9c83b8-3fa4-44a6-9bd8-4520e6096602 00:13:50.326 22:05:32 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dd0543e4-66db-4f16-aa8a-91e94c06bef9 30 00:13:50.891 22:05:32 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dc9c83b8-3fa4-44a6-9bd8-4520e6096602 MY_CLONE 00:13:51.149 22:05:33 -- target/nvmf_lvol.sh@49 -- # clone=f4c51a44-89e6-4cb3-b414-f8d7736637aa 00:13:51.149 22:05:33 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f4c51a44-89e6-4cb3-b414-f8d7736637aa 00:13:52.083 22:05:34 -- target/nvmf_lvol.sh@53 -- # wait 3919263 00:14:00.191 Initializing NVMe Controllers 00:14:00.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:00.191 Controller IO queue size 128, less than required. 00:14:00.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:00.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:00.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:00.191 Initialization complete. Launching workers. 
00:14:00.191 ======================================================== 00:14:00.191 Latency(us) 00:14:00.191 Device Information : IOPS MiB/s Average min max 00:14:00.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9546.60 37.29 13409.65 2202.96 88945.42 00:14:00.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9408.60 36.75 13616.82 2265.38 75768.27 00:14:00.191 ======================================================== 00:14:00.191 Total : 18955.20 74.04 13512.48 2202.96 88945.42 00:14:00.191 00:14:00.191 22:05:41 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:00.191 22:05:41 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dd0543e4-66db-4f16-aa8a-91e94c06bef9 00:14:00.191 22:05:42 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 312fa625-ed1e-4dc8-9b30-77d4a9899819 00:14:00.449 22:05:42 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:00.449 22:05:42 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:00.449 22:05:42 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:00.449 22:05:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:00.449 22:05:42 -- nvmf/common.sh@117 -- # sync 00:14:00.449 22:05:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.449 22:05:42 -- nvmf/common.sh@120 -- # set +e 00:14:00.449 22:05:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.449 22:05:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.449 rmmod nvme_tcp 00:14:00.449 rmmod nvme_fabrics 00:14:00.449 rmmod nvme_keyring 00:14:00.449 22:05:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.449 22:05:42 -- nvmf/common.sh@124 -- # set -e 00:14:00.449 22:05:42 -- nvmf/common.sh@125 -- # return 0 00:14:00.449 22:05:42 -- nvmf/common.sh@478 -- # '[' -n 
3918705 ']' 00:14:00.449 22:05:42 -- nvmf/common.sh@479 -- # killprocess 3918705 00:14:00.449 22:05:42 -- common/autotest_common.sh@936 -- # '[' -z 3918705 ']' 00:14:00.449 22:05:42 -- common/autotest_common.sh@940 -- # kill -0 3918705 00:14:00.449 22:05:42 -- common/autotest_common.sh@941 -- # uname 00:14:00.449 22:05:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.449 22:05:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3918705 00:14:00.449 22:05:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:00.449 22:05:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:00.449 22:05:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3918705' 00:14:00.449 killing process with pid 3918705 00:14:00.449 22:05:42 -- common/autotest_common.sh@955 -- # kill 3918705 00:14:00.449 [2024-04-24 22:05:42.650248] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:00.449 22:05:42 -- common/autotest_common.sh@960 -- # wait 3918705 00:14:01.015 22:05:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:01.015 22:05:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:01.015 22:05:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:01.015 22:05:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.015 22:05:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.015 22:05:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.015 22:05:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.015 22:05:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.910 22:05:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.910 00:14:02.910 real 0m21.084s 00:14:02.910 user 1m12.530s 00:14:02.910 sys 0m6.136s 00:14:02.910 22:05:45 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:02.910 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:02.910 ************************************ 00:14:02.910 END TEST nvmf_lvol 00:14:02.910 ************************************ 00:14:02.910 22:05:45 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:02.911 22:05:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:02.911 22:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:02.911 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:03.198 ************************************ 00:14:03.198 START TEST nvmf_lvs_grow 00:14:03.198 ************************************ 00:14:03.198 22:05:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:03.198 * Looking for test storage... 00:14:03.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.198 22:05:45 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.198 22:05:45 -- nvmf/common.sh@7 -- # uname -s 00:14:03.198 22:05:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.198 22:05:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.198 22:05:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.198 22:05:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.198 22:05:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.198 22:05:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.198 22:05:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.198 22:05:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.198 22:05:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.198 22:05:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.198 
22:05:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:03.198 22:05:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:03.198 22:05:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.198 22:05:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.198 22:05:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.198 22:05:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.198 22:05:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.198 22:05:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.198 22:05:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.198 22:05:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.198 22:05:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.198 22:05:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.198 22:05:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.198 22:05:45 -- paths/export.sh@5 -- # export PATH 00:14:03.198 22:05:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.198 22:05:45 -- nvmf/common.sh@47 -- # : 0 00:14:03.198 22:05:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.198 22:05:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.198 22:05:45 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:14:03.198 22:05:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.198 22:05:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.198 22:05:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.198 22:05:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.198 22:05:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.198 22:05:45 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.198 22:05:45 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.198 22:05:45 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:03.198 22:05:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:03.198 22:05:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.198 22:05:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:03.198 22:05:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:03.198 22:05:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:03.198 22:05:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.198 22:05:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.198 22:05:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.198 22:05:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:03.198 22:05:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:03.198 22:05:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.198 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:05.725 22:05:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.725 22:05:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.725 22:05:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.725 22:05:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.725 22:05:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.725 22:05:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.725 22:05:47 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.725 22:05:47 -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.725 22:05:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.725 22:05:47 -- nvmf/common.sh@296 -- # e810=() 00:14:05.725 22:05:47 -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.725 22:05:47 -- nvmf/common.sh@297 -- # x722=() 00:14:05.725 22:05:47 -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.725 22:05:47 -- nvmf/common.sh@298 -- # mlx=() 00:14:05.725 22:05:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.725 22:05:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.725 22:05:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.725 22:05:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.725 22:05:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.725 22:05:47 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.725 22:05:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:05.725 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:05.725 22:05:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.725 22:05:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:05.725 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:05.725 22:05:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.725 22:05:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.725 22:05:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.725 22:05:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.725 22:05:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.725 22:05:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.725 22:05:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:05.725 Found net devices under 0000:84:00.0: cvl_0_0 00:14:05.725 22:05:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.725 22:05:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.725 
22:05:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.725 22:05:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.725 22:05:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.725 22:05:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:05.725 Found net devices under 0000:84:00.1: cvl_0_1 00:14:05.725 22:05:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.725 22:05:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:05.726 22:05:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:05.726 22:05:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:05.726 22:05:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:05.726 22:05:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:05.726 22:05:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.726 22:05:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.726 22:05:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.726 22:05:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.726 22:05:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.726 22:05:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.726 22:05:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.726 22:05:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.726 22:05:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.726 22:05:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.726 22:05:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.726 22:05:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.726 22:05:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.726 22:05:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.726 22:05:47 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.726 22:05:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.726 22:05:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.726 22:05:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.726 22:05:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.726 22:05:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:14:05.726 00:14:05.726 --- 10.0.0.2 ping statistics --- 00:14:05.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.726 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:05.726 22:05:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:14:05.726 00:14:05.726 --- 10.0.0.1 ping statistics --- 00:14:05.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.726 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:05.726 22:05:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.726 22:05:47 -- nvmf/common.sh@411 -- # return 0 00:14:05.726 22:05:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.726 22:05:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.726 22:05:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.726 22:05:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.726 22:05:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.726 22:05:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.726 22:05:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.726 22:05:47 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:05.726 
22:05:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.726 22:05:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.726 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.726 22:05:47 -- nvmf/common.sh@470 -- # nvmfpid=3922552 00:14:05.726 22:05:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:05.726 22:05:47 -- nvmf/common.sh@471 -- # waitforlisten 3922552 00:14:05.726 22:05:47 -- common/autotest_common.sh@817 -- # '[' -z 3922552 ']' 00:14:05.726 22:05:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.726 22:05:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.726 22:05:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.726 22:05:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.726 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:14:05.726 [2024-04-24 22:05:47.788850] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:05.726 [2024-04-24 22:05:47.788936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.726 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.726 [2024-04-24 22:05:47.863917] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.984 [2024-04-24 22:05:47.983403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.984 [2024-04-24 22:05:47.983459] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:05.984 [2024-04-24 22:05:47.983474] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.984 [2024-04-24 22:05:47.983488] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.984 [2024-04-24 22:05:47.983500] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.984 [2024-04-24 22:05:47.983537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.984 22:05:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.984 22:05:48 -- common/autotest_common.sh@850 -- # return 0 00:14:05.984 22:05:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:05.984 22:05:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:05.984 22:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:05.984 22:05:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.984 22:05:48 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:06.242 [2024-04-24 22:05:48.388181] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.242 22:05:48 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:06.242 22:05:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:06.242 22:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.242 22:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:06.500 ************************************ 00:14:06.500 START TEST lvs_grow_clean 00:14:06.500 ************************************ 00:14:06.500 22:05:48 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:06.500 22:05:48 -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.500 22:05:48 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:06.758 22:05:48 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:06.758 22:05:48 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:07.016 22:05:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:07.016 22:05:49 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:07.016 22:05:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:07.274 22:05:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:07.274 22:05:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:07.274 22:05:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 lvol 150 00:14:07.532 22:05:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=486e0bb7-09f5-4089-9096-b37c89721570 00:14:07.532 22:05:49 -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:07.532 22:05:49 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:07.790 [2024-04-24 22:05:49.959972] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:07.790 [2024-04-24 22:05:49.960064] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:07.790 true 00:14:07.790 22:05:49 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:07.790 22:05:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:08.048 22:05:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:08.048 22:05:50 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:08.306 22:05:50 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 486e0bb7-09f5-4089-9096-b37c89721570 00:14:08.873 22:05:50 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:09.131 [2024-04-24 22:05:51.207499] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:09.131 [2024-04-24 22:05:51.207849] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.131 22:05:51 -- target/nvmf_lvs_grow.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.389 22:05:51 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3923109 00:14:09.389 22:05:51 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:09.389 22:05:51 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:09.389 22:05:51 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3923109 /var/tmp/bdevperf.sock 00:14:09.389 22:05:51 -- common/autotest_common.sh@817 -- # '[' -z 3923109 ']' 00:14:09.389 22:05:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.389 22:05:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.389 22:05:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:09.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.389 22:05:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.389 22:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 [2024-04-24 22:05:51.553247] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:09.389 [2024-04-24 22:05:51.553332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923109 ] 00:14:09.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.389 [2024-04-24 22:05:51.621460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.647 [2024-04-24 22:05:51.742127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.905 22:05:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.905 22:05:52 -- common/autotest_common.sh@850 -- # return 0 00:14:09.905 22:05:52 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:10.470 Nvme0n1 00:14:10.470 22:05:52 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:11.035 [ 00:14:11.035 { 00:14:11.035 "name": "Nvme0n1", 00:14:11.035 "aliases": [ 00:14:11.035 "486e0bb7-09f5-4089-9096-b37c89721570" 00:14:11.035 ], 00:14:11.035 "product_name": "NVMe disk", 00:14:11.035 "block_size": 4096, 00:14:11.035 "num_blocks": 38912, 00:14:11.035 "uuid": "486e0bb7-09f5-4089-9096-b37c89721570", 00:14:11.035 "assigned_rate_limits": { 00:14:11.035 "rw_ios_per_sec": 0, 00:14:11.035 "rw_mbytes_per_sec": 0, 00:14:11.035 "r_mbytes_per_sec": 0, 00:14:11.035 "w_mbytes_per_sec": 0 00:14:11.035 }, 00:14:11.035 "claimed": false, 00:14:11.035 "zoned": false, 00:14:11.035 "supported_io_types": { 00:14:11.035 "read": true, 00:14:11.035 "write": true, 00:14:11.035 "unmap": true, 00:14:11.035 "write_zeroes": true, 00:14:11.035 "flush": true, 00:14:11.035 "reset": true, 00:14:11.035 "compare": true, 00:14:11.035 
"compare_and_write": true, 00:14:11.035 "abort": true, 00:14:11.035 "nvme_admin": true, 00:14:11.035 "nvme_io": true 00:14:11.035 }, 00:14:11.035 "memory_domains": [ 00:14:11.035 { 00:14:11.035 "dma_device_id": "system", 00:14:11.035 "dma_device_type": 1 00:14:11.035 } 00:14:11.035 ], 00:14:11.035 "driver_specific": { 00:14:11.035 "nvme": [ 00:14:11.035 { 00:14:11.035 "trid": { 00:14:11.035 "trtype": "TCP", 00:14:11.035 "adrfam": "IPv4", 00:14:11.035 "traddr": "10.0.0.2", 00:14:11.035 "trsvcid": "4420", 00:14:11.035 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:11.035 }, 00:14:11.035 "ctrlr_data": { 00:14:11.035 "cntlid": 1, 00:14:11.035 "vendor_id": "0x8086", 00:14:11.035 "model_number": "SPDK bdev Controller", 00:14:11.035 "serial_number": "SPDK0", 00:14:11.035 "firmware_revision": "24.05", 00:14:11.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.035 "oacs": { 00:14:11.035 "security": 0, 00:14:11.035 "format": 0, 00:14:11.035 "firmware": 0, 00:14:11.035 "ns_manage": 0 00:14:11.035 }, 00:14:11.035 "multi_ctrlr": true, 00:14:11.035 "ana_reporting": false 00:14:11.035 }, 00:14:11.035 "vs": { 00:14:11.035 "nvme_version": "1.3" 00:14:11.035 }, 00:14:11.035 "ns_data": { 00:14:11.035 "id": 1, 00:14:11.035 "can_share": true 00:14:11.035 } 00:14:11.035 } 00:14:11.035 ], 00:14:11.035 "mp_policy": "active_passive" 00:14:11.035 } 00:14:11.035 } 00:14:11.035 ] 00:14:11.035 22:05:53 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3923256 00:14:11.035 22:05:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:11.035 22:05:53 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:11.035 Running I/O for 10 seconds... 
00:14:12.410 Latency(us) 00:14:12.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.410 Nvme0n1 : 1.00 13789.00 53.86 0.00 0.00 0.00 0.00 0.00 00:14:12.410 =================================================================================================================== 00:14:12.410 Total : 13789.00 53.86 0.00 0.00 0.00 0.00 0.00 00:14:12.410 00:14:12.976 22:05:55 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:13.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.236 Nvme0n1 : 2.00 13948.50 54.49 0.00 0.00 0.00 0.00 0.00 00:14:13.236 =================================================================================================================== 00:14:13.236 Total : 13948.50 54.49 0.00 0.00 0.00 0.00 0.00 00:14:13.236 00:14:13.236 true 00:14:13.236 22:05:55 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:13.236 22:05:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:13.495 22:05:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:13.495 22:05:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:13.495 22:05:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 3923256 00:14:14.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.061 Nvme0n1 : 3.00 14077.00 54.99 0.00 0.00 0.00 0.00 0.00 00:14:14.061 =================================================================================================================== 00:14:14.061 Total : 14077.00 54.99 0.00 0.00 0.00 0.00 0.00 00:14:14.061 00:14:15.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.434 
Nvme0n1 : 4.00 14128.50 55.19 0.00 0.00 0.00 0.00 0.00 00:14:15.434 =================================================================================================================== 00:14:15.434 Total : 14128.50 55.19 0.00 0.00 0.00 0.00 0.00 00:14:15.434 00:14:16.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.368 Nvme0n1 : 5.00 14209.60 55.51 0.00 0.00 0.00 0.00 0.00 00:14:16.368 =================================================================================================================== 00:14:16.368 Total : 14209.60 55.51 0.00 0.00 0.00 0.00 0.00 00:14:16.368 00:14:17.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.300 Nvme0n1 : 6.00 14236.33 55.61 0.00 0.00 0.00 0.00 0.00 00:14:17.300 =================================================================================================================== 00:14:17.300 Total : 14236.33 55.61 0.00 0.00 0.00 0.00 0.00 00:14:17.300 00:14:18.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.233 Nvme0n1 : 7.00 14254.86 55.68 0.00 0.00 0.00 0.00 0.00 00:14:18.233 =================================================================================================================== 00:14:18.233 Total : 14254.86 55.68 0.00 0.00 0.00 0.00 0.00 00:14:18.233 00:14:19.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.166 Nvme0n1 : 8.00 14288.12 55.81 0.00 0.00 0.00 0.00 0.00 00:14:19.166 =================================================================================================================== 00:14:19.166 Total : 14288.12 55.81 0.00 0.00 0.00 0.00 0.00 00:14:19.166 00:14:20.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.098 Nvme0n1 : 9.00 14327.00 55.96 0.00 0.00 0.00 0.00 0.00 00:14:20.098 =================================================================================================================== 
00:14:20.098 Total : 14327.00 55.96 0.00 0.00 0.00 0.00 0.00 00:14:20.098 00:14:21.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.035 Nvme0n1 : 10.00 14342.50 56.03 0.00 0.00 0.00 0.00 0.00 00:14:21.035 =================================================================================================================== 00:14:21.035 Total : 14342.50 56.03 0.00 0.00 0.00 0.00 0.00 00:14:21.035 00:14:21.035 00:14:21.035 Latency(us) 00:14:21.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.035 Nvme0n1 : 10.01 14345.10 56.04 0.00 0.00 8917.32 4077.80 17476.27 00:14:21.035 =================================================================================================================== 00:14:21.035 Total : 14345.10 56.04 0.00 0.00 8917.32 4077.80 17476.27 00:14:21.035 0 00:14:21.293 22:06:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3923109 00:14:21.293 22:06:03 -- common/autotest_common.sh@936 -- # '[' -z 3923109 ']' 00:14:21.293 22:06:03 -- common/autotest_common.sh@940 -- # kill -0 3923109 00:14:21.293 22:06:03 -- common/autotest_common.sh@941 -- # uname 00:14:21.293 22:06:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.293 22:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3923109 00:14:21.293 22:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:21.293 22:06:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:21.293 22:06:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3923109' 00:14:21.293 killing process with pid 3923109 00:14:21.293 22:06:03 -- common/autotest_common.sh@955 -- # kill 3923109 00:14:21.293 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.293 00:14:21.293 Latency(us) 00:14:21.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:21.294 =================================================================================================================== 00:14:21.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.294 22:06:03 -- common/autotest_common.sh@960 -- # wait 3923109 00:14:21.551 22:06:03 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.808 22:06:03 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.131 22:06:04 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:22.131 22:06:04 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:22.389 22:06:04 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:22.389 22:06:04 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:22.389 22:06:04 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:22.646 [2024-04-24 22:06:04.896401] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:22.903 22:06:04 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:22.903 22:06:04 -- common/autotest_common.sh@638 -- # local es=0 00:14:22.903 22:06:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:22.903 22:06:04 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.903 22:06:04 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:14:22.903 22:06:04 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.903 22:06:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.903 22:06:04 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.903 22:06:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:22.903 22:06:04 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.903 22:06:04 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:22.903 22:06:04 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:23.160 request: 00:14:23.160 { 00:14:23.160 "uuid": "f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1", 00:14:23.160 "method": "bdev_lvol_get_lvstores", 00:14:23.160 "req_id": 1 00:14:23.160 } 00:14:23.160 Got JSON-RPC error response 00:14:23.160 response: 00:14:23.160 { 00:14:23.160 "code": -19, 00:14:23.160 "message": "No such device" 00:14:23.160 } 00:14:23.160 22:06:05 -- common/autotest_common.sh@641 -- # es=1 00:14:23.160 22:06:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:23.160 22:06:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:23.160 22:06:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:23.160 22:06:05 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.417 aio_bdev 00:14:23.417 22:06:05 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 486e0bb7-09f5-4089-9096-b37c89721570 00:14:23.417 22:06:05 -- common/autotest_common.sh@885 -- # local 
bdev_name=486e0bb7-09f5-4089-9096-b37c89721570 00:14:23.417 22:06:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:23.417 22:06:05 -- common/autotest_common.sh@887 -- # local i 00:14:23.417 22:06:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:23.417 22:06:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:23.417 22:06:05 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:23.674 22:06:05 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 486e0bb7-09f5-4089-9096-b37c89721570 -t 2000 00:14:23.931 [ 00:14:23.932 { 00:14:23.932 "name": "486e0bb7-09f5-4089-9096-b37c89721570", 00:14:23.932 "aliases": [ 00:14:23.932 "lvs/lvol" 00:14:23.932 ], 00:14:23.932 "product_name": "Logical Volume", 00:14:23.932 "block_size": 4096, 00:14:23.932 "num_blocks": 38912, 00:14:23.932 "uuid": "486e0bb7-09f5-4089-9096-b37c89721570", 00:14:23.932 "assigned_rate_limits": { 00:14:23.932 "rw_ios_per_sec": 0, 00:14:23.932 "rw_mbytes_per_sec": 0, 00:14:23.932 "r_mbytes_per_sec": 0, 00:14:23.932 "w_mbytes_per_sec": 0 00:14:23.932 }, 00:14:23.932 "claimed": false, 00:14:23.932 "zoned": false, 00:14:23.932 "supported_io_types": { 00:14:23.932 "read": true, 00:14:23.932 "write": true, 00:14:23.932 "unmap": true, 00:14:23.932 "write_zeroes": true, 00:14:23.932 "flush": false, 00:14:23.932 "reset": true, 00:14:23.932 "compare": false, 00:14:23.932 "compare_and_write": false, 00:14:23.932 "abort": false, 00:14:23.932 "nvme_admin": false, 00:14:23.932 "nvme_io": false 00:14:23.932 }, 00:14:23.932 "driver_specific": { 00:14:23.932 "lvol": { 00:14:23.932 "lvol_store_uuid": "f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1", 00:14:23.932 "base_bdev": "aio_bdev", 00:14:23.932 "thin_provision": false, 00:14:23.932 "snapshot": false, 00:14:23.932 "clone": false, 00:14:23.932 "esnap_clone": false 00:14:23.932 } 00:14:23.932 } 
00:14:23.932 } 00:14:23.932 ] 00:14:23.932 22:06:06 -- common/autotest_common.sh@893 -- # return 0 00:14:23.932 22:06:06 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:23.932 22:06:06 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:24.496 22:06:06 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:24.496 22:06:06 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:24.497 22:06:06 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:24.754 22:06:06 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:24.754 22:06:06 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 486e0bb7-09f5-4089-9096-b37c89721570 00:14:25.012 22:06:07 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f80c4f5e-d0e0-4906-b704-afbd0b7e8ab1 00:14:25.578 22:06:07 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:25.578 22:06:07 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:25.836 00:14:25.836 real 0m19.331s 00:14:25.836 user 0m19.222s 00:14:25.836 sys 0m2.210s 00:14:25.836 22:06:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.836 22:06:07 -- common/autotest_common.sh@10 -- # set +x 00:14:25.836 ************************************ 00:14:25.836 END TEST lvs_grow_clean 00:14:25.836 ************************************ 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:25.836 22:06:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 
1 ']' 00:14:25.836 22:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.836 22:06:07 -- common/autotest_common.sh@10 -- # set +x 00:14:25.836 ************************************ 00:14:25.836 START TEST lvs_grow_dirty 00:14:25.836 ************************************ 00:14:25.836 22:06:07 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:25.836 22:06:07 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.402 22:06:08 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:26.402 22:06:08 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:26.968 22:06:08 -- target/nvmf_lvs_grow.sh@28 -- # lvs=634703d6-e77e-4be1-83b1-1845cd23000a 00:14:26.968 22:06:09 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:26.968 22:06:09 -- 
target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:27.226 22:06:09 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:27.226 22:06:09 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:27.226 22:06:09 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 634703d6-e77e-4be1-83b1-1845cd23000a lvol 150 00:14:27.791 22:06:09 -- target/nvmf_lvs_grow.sh@33 -- # lvol=166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:27.792 22:06:09 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.792 22:06:09 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:28.050 [2024-04-24 22:06:10.088221] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:28.050 [2024-04-24 22:06:10.088315] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:28.050 true 00:14:28.050 22:06:10 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:28.050 22:06:10 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:28.309 22:06:10 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:28.309 22:06:10 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.566 22:06:10 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:29.132 22:06:11 -- target/nvmf_lvs_grow.sh@43 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:29.390 [2024-04-24 22:06:11.412201] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.390 22:06:11 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.648 22:06:11 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3926054 00:14:29.648 22:06:11 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:29.648 22:06:11 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.648 22:06:11 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3926054 /var/tmp/bdevperf.sock 00:14:29.648 22:06:11 -- common/autotest_common.sh@817 -- # '[' -z 3926054 ']' 00:14:29.649 22:06:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.649 22:06:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.649 22:06:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.649 22:06:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.649 22:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.649 [2024-04-24 22:06:11.757383] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:29.649 [2024-04-24 22:06:11.757476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926054 ] 00:14:29.649 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.649 [2024-04-24 22:06:11.826200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.907 [2024-04-24 22:06:11.950227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.165 22:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:30.165 22:06:12 -- common/autotest_common.sh@850 -- # return 0 00:14:30.165 22:06:12 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:30.731 Nvme0n1 00:14:30.731 22:06:12 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.297 [ 00:14:31.297 { 00:14:31.297 "name": "Nvme0n1", 00:14:31.297 "aliases": [ 00:14:31.297 "166f3d7f-0b5e-4cad-9c40-eb69141a3710" 00:14:31.297 ], 00:14:31.297 "product_name": "NVMe disk", 00:14:31.297 "block_size": 4096, 00:14:31.297 "num_blocks": 38912, 00:14:31.297 "uuid": "166f3d7f-0b5e-4cad-9c40-eb69141a3710", 00:14:31.297 "assigned_rate_limits": { 00:14:31.297 "rw_ios_per_sec": 0, 00:14:31.297 "rw_mbytes_per_sec": 0, 00:14:31.297 "r_mbytes_per_sec": 0, 00:14:31.297 "w_mbytes_per_sec": 0 00:14:31.297 }, 00:14:31.297 "claimed": false, 00:14:31.297 "zoned": false, 00:14:31.297 "supported_io_types": { 00:14:31.297 "read": true, 00:14:31.297 "write": true, 00:14:31.297 "unmap": true, 00:14:31.297 "write_zeroes": true, 00:14:31.297 "flush": true, 00:14:31.297 "reset": true, 00:14:31.297 "compare": true, 00:14:31.297 
"compare_and_write": true, 00:14:31.297 "abort": true, 00:14:31.297 "nvme_admin": true, 00:14:31.297 "nvme_io": true 00:14:31.297 }, 00:14:31.297 "memory_domains": [ 00:14:31.297 { 00:14:31.297 "dma_device_id": "system", 00:14:31.297 "dma_device_type": 1 00:14:31.297 } 00:14:31.297 ], 00:14:31.297 "driver_specific": { 00:14:31.297 "nvme": [ 00:14:31.297 { 00:14:31.297 "trid": { 00:14:31.297 "trtype": "TCP", 00:14:31.297 "adrfam": "IPv4", 00:14:31.297 "traddr": "10.0.0.2", 00:14:31.297 "trsvcid": "4420", 00:14:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:31.297 }, 00:14:31.297 "ctrlr_data": { 00:14:31.297 "cntlid": 1, 00:14:31.297 "vendor_id": "0x8086", 00:14:31.297 "model_number": "SPDK bdev Controller", 00:14:31.297 "serial_number": "SPDK0", 00:14:31.297 "firmware_revision": "24.05", 00:14:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.297 "oacs": { 00:14:31.297 "security": 0, 00:14:31.297 "format": 0, 00:14:31.297 "firmware": 0, 00:14:31.297 "ns_manage": 0 00:14:31.297 }, 00:14:31.297 "multi_ctrlr": true, 00:14:31.297 "ana_reporting": false 00:14:31.297 }, 00:14:31.297 "vs": { 00:14:31.298 "nvme_version": "1.3" 00:14:31.298 }, 00:14:31.298 "ns_data": { 00:14:31.298 "id": 1, 00:14:31.298 "can_share": true 00:14:31.298 } 00:14:31.298 } 00:14:31.298 ], 00:14:31.298 "mp_policy": "active_passive" 00:14:31.298 } 00:14:31.298 } 00:14:31.298 ] 00:14:31.298 22:06:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3926315 00:14:31.298 22:06:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.298 22:06:13 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.298 Running I/O for 10 seconds... 
00:14:32.232 Latency(us) 00:14:32.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.232 Nvme0n1 : 1.00 13909.00 54.33 0.00 0.00 0.00 0.00 0.00 00:14:32.232 =================================================================================================================== 00:14:32.232 Total : 13909.00 54.33 0.00 0.00 0.00 0.00 0.00 00:14:32.232 00:14:33.167 22:06:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:33.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.425 Nvme0n1 : 2.00 14025.00 54.79 0.00 0.00 0.00 0.00 0.00 00:14:33.425 =================================================================================================================== 00:14:33.425 Total : 14025.00 54.79 0.00 0.00 0.00 0.00 0.00 00:14:33.425 00:14:33.425 true 00:14:33.425 22:06:15 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:33.425 22:06:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:33.992 22:06:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:33.992 22:06:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:33.992 22:06:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 3926315 00:14:34.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.249 Nvme0n1 : 3.00 14117.00 55.14 0.00 0.00 0.00 0.00 0.00 00:14:34.249 =================================================================================================================== 00:14:34.249 Total : 14117.00 55.14 0.00 0.00 0.00 0.00 0.00 00:14:34.249 00:14:35.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.639 
Nvme0n1 : 4.00 14195.50 55.45 0.00 0.00 0.00 0.00 0.00 00:14:35.639 =================================================================================================================== 00:14:35.639 Total : 14195.50 55.45 0.00 0.00 0.00 0.00 0.00 00:14:35.639 00:14:36.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.574 Nvme0n1 : 5.00 14256.00 55.69 0.00 0.00 0.00 0.00 0.00 00:14:36.574 =================================================================================================================== 00:14:36.574 Total : 14256.00 55.69 0.00 0.00 0.00 0.00 0.00 00:14:36.574 00:14:37.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.509 Nvme0n1 : 6.00 14289.67 55.82 0.00 0.00 0.00 0.00 0.00 00:14:37.509 =================================================================================================================== 00:14:37.509 Total : 14289.67 55.82 0.00 0.00 0.00 0.00 0.00 00:14:37.509 00:14:38.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.444 Nvme0n1 : 7.00 14311.43 55.90 0.00 0.00 0.00 0.00 0.00 00:14:38.444 =================================================================================================================== 00:14:38.444 Total : 14311.43 55.90 0.00 0.00 0.00 0.00 0.00 00:14:38.444 00:14:39.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.379 Nvme0n1 : 8.00 14351.38 56.06 0.00 0.00 0.00 0.00 0.00 00:14:39.379 =================================================================================================================== 00:14:39.379 Total : 14351.38 56.06 0.00 0.00 0.00 0.00 0.00 00:14:39.379 00:14:40.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.341 Nvme0n1 : 9.00 14376.44 56.16 0.00 0.00 0.00 0.00 0.00 00:14:40.341 =================================================================================================================== 
00:14:40.341 Total : 14376.44 56.16 0.00 0.00 0.00 0.00 0.00 00:14:40.341 00:14:41.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.276 Nvme0n1 : 10.00 14395.30 56.23 0.00 0.00 0.00 0.00 0.00 00:14:41.276 =================================================================================================================== 00:14:41.276 Total : 14395.30 56.23 0.00 0.00 0.00 0.00 0.00 00:14:41.276 00:14:41.276 00:14:41.276 Latency(us) 00:14:41.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.276 Nvme0n1 : 10.01 14393.11 56.22 0.00 0.00 8887.07 5097.24 17185.00 00:14:41.276 =================================================================================================================== 00:14:41.276 Total : 14393.11 56.22 0.00 0.00 8887.07 5097.24 17185.00 00:14:41.276 0 00:14:41.276 22:06:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3926054 00:14:41.276 22:06:23 -- common/autotest_common.sh@936 -- # '[' -z 3926054 ']' 00:14:41.276 22:06:23 -- common/autotest_common.sh@940 -- # kill -0 3926054 00:14:41.276 22:06:23 -- common/autotest_common.sh@941 -- # uname 00:14:41.276 22:06:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.276 22:06:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3926054 00:14:41.533 22:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:41.533 22:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:41.533 22:06:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3926054' 00:14:41.533 killing process with pid 3926054 00:14:41.533 22:06:23 -- common/autotest_common.sh@955 -- # kill 3926054 00:14:41.533 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.533 00:14:41.533 Latency(us) 00:14:41.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:14:41.533 =================================================================================================================== 00:14:41.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.533 22:06:23 -- common/autotest_common.sh@960 -- # wait 3926054 00:14:41.791 22:06:23 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.049 22:06:24 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3922552 00:14:42.615 22:06:24 -- target/nvmf_lvs_grow.sh@75 -- # wait 3922552 00:14:42.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3922552 Killed "${NVMF_APP[@]}" "$@" 00:14:42.873 22:06:24 -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:42.873 22:06:24 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:42.873 22:06:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.873 22:06:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.873 22:06:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.873 22:06:24 -- nvmf/common.sh@470 -- # nvmfpid=3927644 00:14:42.873 22:06:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:42.873 22:06:24 -- nvmf/common.sh@471 -- # waitforlisten 3927644 
00:14:42.873 22:06:24 -- common/autotest_common.sh@817 -- # '[' -z 3927644 ']' 00:14:42.873 22:06:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.873 22:06:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.873 22:06:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.873 22:06:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.873 22:06:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.873 [2024-04-24 22:06:24.947031] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:42.873 [2024-04-24 22:06:24.947130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.873 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.873 [2024-04-24 22:06:25.025343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.131 [2024-04-24 22:06:25.148725] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.131 [2024-04-24 22:06:25.148786] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.131 [2024-04-24 22:06:25.148803] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.131 [2024-04-24 22:06:25.148817] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.131 [2024-04-24 22:06:25.148829] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.131 [2024-04-24 22:06:25.148871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.131 22:06:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.131 22:06:25 -- common/autotest_common.sh@850 -- # return 0 00:14:43.131 22:06:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.131 22:06:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.131 22:06:25 -- common/autotest_common.sh@10 -- # set +x 00:14:43.131 22:06:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.131 22:06:25 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.389 [2024-04-24 22:06:25.568130] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:43.389 [2024-04-24 22:06:25.568272] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:43.389 [2024-04-24 22:06:25.568330] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:43.389 22:06:25 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:43.389 22:06:25 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:43.389 22:06:25 -- common/autotest_common.sh@885 -- # local bdev_name=166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:43.389 22:06:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:43.389 22:06:25 -- common/autotest_common.sh@887 -- # local i 00:14:43.389 22:06:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:43.389 22:06:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:43.389 22:06:25 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.648 22:06:25 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 166f3d7f-0b5e-4cad-9c40-eb69141a3710 -t 2000 00:14:43.906 [ 00:14:43.906 { 00:14:43.906 "name": "166f3d7f-0b5e-4cad-9c40-eb69141a3710", 00:14:43.906 "aliases": [ 00:14:43.906 "lvs/lvol" 00:14:43.906 ], 00:14:43.906 "product_name": "Logical Volume", 00:14:43.906 "block_size": 4096, 00:14:43.906 "num_blocks": 38912, 00:14:43.906 "uuid": "166f3d7f-0b5e-4cad-9c40-eb69141a3710", 00:14:43.906 "assigned_rate_limits": { 00:14:43.906 "rw_ios_per_sec": 0, 00:14:43.906 "rw_mbytes_per_sec": 0, 00:14:43.906 "r_mbytes_per_sec": 0, 00:14:43.906 "w_mbytes_per_sec": 0 00:14:43.906 }, 00:14:43.906 "claimed": false, 00:14:43.906 "zoned": false, 00:14:43.906 "supported_io_types": { 00:14:43.906 "read": true, 00:14:43.906 "write": true, 00:14:43.906 "unmap": true, 00:14:43.906 "write_zeroes": true, 00:14:43.906 "flush": false, 00:14:43.906 "reset": true, 00:14:43.906 "compare": false, 00:14:43.906 "compare_and_write": false, 00:14:43.906 "abort": false, 00:14:43.906 "nvme_admin": false, 00:14:43.906 "nvme_io": false 00:14:43.906 }, 00:14:43.906 "driver_specific": { 00:14:43.906 "lvol": { 00:14:43.906 "lvol_store_uuid": "634703d6-e77e-4be1-83b1-1845cd23000a", 00:14:43.906 "base_bdev": "aio_bdev", 00:14:43.906 "thin_provision": false, 00:14:43.906 "snapshot": false, 00:14:43.906 "clone": false, 00:14:43.906 "esnap_clone": false 00:14:43.906 } 00:14:43.906 } 00:14:43.906 } 00:14:43.906 ] 00:14:44.163 22:06:26 -- common/autotest_common.sh@893 -- # return 0 00:14:44.163 22:06:26 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:44.163 22:06:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:44.730 22:06:26 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:44.730 22:06:26 -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:44.730 22:06:26 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:44.730 22:06:26 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:44.730 22:06:26 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.988 [2024-04-24 22:06:27.238159] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:45.246 22:06:27 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:45.246 22:06:27 -- common/autotest_common.sh@638 -- # local es=0 00:14:45.246 22:06:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:45.246 22:06:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.246 22:06:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:45.246 22:06:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.246 22:06:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:45.246 22:06:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.246 22:06:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:45.246 22:06:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.246 22:06:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:45.246 
22:06:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:45.505 request: 00:14:45.505 { 00:14:45.505 "uuid": "634703d6-e77e-4be1-83b1-1845cd23000a", 00:14:45.505 "method": "bdev_lvol_get_lvstores", 00:14:45.505 "req_id": 1 00:14:45.505 } 00:14:45.505 Got JSON-RPC error response 00:14:45.505 response: 00:14:45.505 { 00:14:45.505 "code": -19, 00:14:45.505 "message": "No such device" 00:14:45.505 } 00:14:45.505 22:06:27 -- common/autotest_common.sh@641 -- # es=1 00:14:45.505 22:06:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:45.505 22:06:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:45.505 22:06:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:45.505 22:06:27 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.763 aio_bdev 00:14:45.763 22:06:27 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:45.763 22:06:27 -- common/autotest_common.sh@885 -- # local bdev_name=166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:45.763 22:06:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:45.763 22:06:27 -- common/autotest_common.sh@887 -- # local i 00:14:45.763 22:06:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:45.763 22:06:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:45.763 22:06:27 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:46.021 22:06:28 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 166f3d7f-0b5e-4cad-9c40-eb69141a3710 -t 2000 00:14:46.279 [ 00:14:46.279 { 00:14:46.279 "name": 
"166f3d7f-0b5e-4cad-9c40-eb69141a3710", 00:14:46.279 "aliases": [ 00:14:46.279 "lvs/lvol" 00:14:46.279 ], 00:14:46.279 "product_name": "Logical Volume", 00:14:46.279 "block_size": 4096, 00:14:46.279 "num_blocks": 38912, 00:14:46.279 "uuid": "166f3d7f-0b5e-4cad-9c40-eb69141a3710", 00:14:46.279 "assigned_rate_limits": { 00:14:46.279 "rw_ios_per_sec": 0, 00:14:46.279 "rw_mbytes_per_sec": 0, 00:14:46.279 "r_mbytes_per_sec": 0, 00:14:46.279 "w_mbytes_per_sec": 0 00:14:46.279 }, 00:14:46.279 "claimed": false, 00:14:46.279 "zoned": false, 00:14:46.279 "supported_io_types": { 00:14:46.279 "read": true, 00:14:46.279 "write": true, 00:14:46.279 "unmap": true, 00:14:46.279 "write_zeroes": true, 00:14:46.279 "flush": false, 00:14:46.279 "reset": true, 00:14:46.279 "compare": false, 00:14:46.279 "compare_and_write": false, 00:14:46.279 "abort": false, 00:14:46.279 "nvme_admin": false, 00:14:46.279 "nvme_io": false 00:14:46.279 }, 00:14:46.279 "driver_specific": { 00:14:46.279 "lvol": { 00:14:46.279 "lvol_store_uuid": "634703d6-e77e-4be1-83b1-1845cd23000a", 00:14:46.279 "base_bdev": "aio_bdev", 00:14:46.279 "thin_provision": false, 00:14:46.279 "snapshot": false, 00:14:46.279 "clone": false, 00:14:46.279 "esnap_clone": false 00:14:46.279 } 00:14:46.279 } 00:14:46.279 } 00:14:46.279 ] 00:14:46.279 22:06:28 -- common/autotest_common.sh@893 -- # return 0 00:14:46.279 22:06:28 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:46.279 22:06:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:46.536 22:06:28 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:46.536 22:06:28 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:46.536 22:06:28 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:46.795 
22:06:29 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:46.795 22:06:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 166f3d7f-0b5e-4cad-9c40-eb69141a3710 00:14:47.360 22:06:29 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 634703d6-e77e-4be1-83b1-1845cd23000a 00:14:47.617 22:06:29 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:47.876 22:06:30 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.876 00:14:47.876 real 0m22.072s 00:14:47.876 user 0m55.506s 00:14:47.876 sys 0m5.519s 00:14:47.876 22:06:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.876 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:14:47.876 ************************************ 00:14:47.876 END TEST lvs_grow_dirty 00:14:47.876 ************************************ 00:14:47.876 22:06:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:47.876 22:06:30 -- common/autotest_common.sh@794 -- # type=--id 00:14:47.876 22:06:30 -- common/autotest_common.sh@795 -- # id=0 00:14:47.876 22:06:30 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:47.876 22:06:30 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.876 22:06:30 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:47.876 22:06:30 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:47.876 22:06:30 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:47.876 22:06:30 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.876 nvmf_trace.0 00:14:47.876 22:06:30 -- common/autotest_common.sh@809 -- # 
return 0 00:14:47.876 22:06:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:47.876 22:06:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:47.876 22:06:30 -- nvmf/common.sh@117 -- # sync 00:14:47.876 22:06:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.876 22:06:30 -- nvmf/common.sh@120 -- # set +e 00:14:47.876 22:06:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.876 22:06:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.876 rmmod nvme_tcp 00:14:47.876 rmmod nvme_fabrics 00:14:48.132 rmmod nvme_keyring 00:14:48.132 22:06:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.132 22:06:30 -- nvmf/common.sh@124 -- # set -e 00:14:48.132 22:06:30 -- nvmf/common.sh@125 -- # return 0 00:14:48.132 22:06:30 -- nvmf/common.sh@478 -- # '[' -n 3927644 ']' 00:14:48.132 22:06:30 -- nvmf/common.sh@479 -- # killprocess 3927644 00:14:48.132 22:06:30 -- common/autotest_common.sh@936 -- # '[' -z 3927644 ']' 00:14:48.132 22:06:30 -- common/autotest_common.sh@940 -- # kill -0 3927644 00:14:48.132 22:06:30 -- common/autotest_common.sh@941 -- # uname 00:14:48.132 22:06:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.132 22:06:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3927644 00:14:48.132 22:06:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.132 22:06:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.132 22:06:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3927644' 00:14:48.132 killing process with pid 3927644 00:14:48.132 22:06:30 -- common/autotest_common.sh@955 -- # kill 3927644 00:14:48.132 22:06:30 -- common/autotest_common.sh@960 -- # wait 3927644 00:14:48.387 22:06:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:48.387 22:06:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:48.387 22:06:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:48.387 22:06:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.387 22:06:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.387 22:06:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.387 22:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.387 22:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.284 22:06:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.284 00:14:50.284 real 0m47.314s 00:14:50.284 user 1m21.436s 00:14:50.284 sys 0m10.054s 00:14:50.284 22:06:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.284 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:14:50.284 ************************************ 00:14:50.284 END TEST nvmf_lvs_grow 00:14:50.284 ************************************ 00:14:50.543 22:06:32 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:50.543 22:06:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.543 22:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.543 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:14:50.543 ************************************ 00:14:50.543 START TEST nvmf_bdev_io_wait 00:14:50.543 ************************************ 00:14:50.543 22:06:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:50.543 * Looking for test storage... 
00:14:50.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.543 22:06:32 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.543 22:06:32 -- nvmf/common.sh@7 -- # uname -s 00:14:50.543 22:06:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.543 22:06:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.543 22:06:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.543 22:06:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.543 22:06:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.543 22:06:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.543 22:06:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.543 22:06:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.543 22:06:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.543 22:06:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.543 22:06:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.543 22:06:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.543 22:06:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.543 22:06:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.543 22:06:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.543 22:06:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.543 22:06:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.543 22:06:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.543 22:06:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.543 22:06:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.543 22:06:32 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.543 22:06:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.543 22:06:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.543 22:06:32 -- paths/export.sh@5 -- # export PATH 00:14:50.543 22:06:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.543 22:06:32 -- nvmf/common.sh@47 -- # : 0 00:14:50.543 22:06:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.543 22:06:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.543 22:06:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.543 22:06:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.543 22:06:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.543 22:06:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.543 22:06:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.543 22:06:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.543 22:06:32 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.543 22:06:32 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.543 22:06:32 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:50.543 22:06:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:50.543 22:06:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.543 22:06:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:50.543 22:06:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:50.544 22:06:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:50.544 22:06:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.544 22:06:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.544 22:06:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.544 
22:06:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:50.544 22:06:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:50.544 22:06:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.544 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:14:53.077 22:06:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:53.077 22:06:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.077 22:06:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.077 22:06:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.077 22:06:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.077 22:06:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.077 22:06:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.077 22:06:35 -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.077 22:06:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.077 22:06:35 -- nvmf/common.sh@296 -- # e810=() 00:14:53.077 22:06:35 -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.077 22:06:35 -- nvmf/common.sh@297 -- # x722=() 00:14:53.077 22:06:35 -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.077 22:06:35 -- nvmf/common.sh@298 -- # mlx=() 00:14:53.077 22:06:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.078 22:06:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.078 22:06:35 
-- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.078 22:06:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.078 22:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:53.078 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:53.078 22:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.078 22:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:53.078 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:53.078 22:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.078 22:06:35 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.078 22:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.078 22:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.078 22:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:53.078 Found net devices under 0000:84:00.0: cvl_0_0 00:14:53.078 22:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.078 22:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.078 22:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.078 22:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:53.078 Found net devices under 0000:84:00.1: cvl_0_1 00:14:53.078 22:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:53.078 22:06:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:53.078 22:06:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.078 22:06:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.078 22:06:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.078 22:06:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.078 22:06:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.078 22:06:35 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.078 22:06:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.078 22:06:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.078 22:06:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.078 22:06:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.078 22:06:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.078 22:06:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.078 22:06:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.078 22:06:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.078 22:06:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.078 22:06:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.078 22:06:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.078 22:06:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.078 22:06:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:53.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:53.078 00:14:53.078 --- 10.0.0.2 ping statistics --- 00:14:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.078 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:53.078 22:06:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:53.078 00:14:53.078 --- 10.0.0.1 ping statistics --- 00:14:53.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.078 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:53.078 22:06:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.078 22:06:35 -- nvmf/common.sh@411 -- # return 0 00:14:53.078 22:06:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:53.078 22:06:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.078 22:06:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:53.078 22:06:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.078 22:06:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:53.078 22:06:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:53.078 22:06:35 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:53.078 22:06:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:53.078 22:06:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:53.078 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.078 22:06:35 -- nvmf/common.sh@470 -- # nvmfpid=3930324 00:14:53.078 22:06:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:53.078 22:06:35 -- nvmf/common.sh@471 -- # waitforlisten 3930324 00:14:53.078 22:06:35 -- common/autotest_common.sh@817 -- # '[' -z 3930324 ']' 00:14:53.078 22:06:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.078 22:06:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.078 22:06:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:53.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.078 22:06:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.078 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.078 [2024-04-24 22:06:35.290057] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:53.078 [2024-04-24 22:06:35.290148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.078 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.337 [2024-04-24 22:06:35.367142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.337 [2024-04-24 22:06:35.492825] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.337 [2024-04-24 22:06:35.492891] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.337 [2024-04-24 22:06:35.492907] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.337 [2024-04-24 22:06:35.492921] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.337 [2024-04-24 22:06:35.492933] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:53.337 [2024-04-24 22:06:35.493060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.337 [2024-04-24 22:06:35.493134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.337 [2024-04-24 22:06:35.493197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.337 [2024-04-24 22:06:35.493200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.337 22:06:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.337 22:06:35 -- common/autotest_common.sh@850 -- # return 0 00:14:53.337 22:06:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:53.337 22:06:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:53.337 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.337 22:06:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.337 22:06:35 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:53.337 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.337 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.337 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.337 22:06:35 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:53.337 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.337 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.596 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.596 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 [2024-04-24 22:06:35.644728] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.596 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.596 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 Malloc0 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.596 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.596 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.596 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.596 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.596 22:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.596 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:14:53.596 [2024-04-24 22:06:35.713891] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:53.596 [2024-04-24 22:06:35.714234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.596 22:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3930356 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json 
/dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@30 -- # READ_PID=3930358 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # config=() 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # local subsystem config 00:14:53.596 22:06:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:53.596 22:06:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:53.596 { 00:14:53.596 "params": { 00:14:53.596 "name": "Nvme$subsystem", 00:14:53.596 "trtype": "$TEST_TRANSPORT", 00:14:53.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.596 "adrfam": "ipv4", 00:14:53.596 "trsvcid": "$NVMF_PORT", 00:14:53.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.596 "hdgst": ${hdgst:-false}, 00:14:53.596 "ddgst": ${ddgst:-false} 00:14:53.596 }, 00:14:53.596 "method": "bdev_nvme_attach_controller" 00:14:53.596 } 00:14:53.596 EOF 00:14:53.596 )") 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3930360 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # config=() 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # local subsystem config 00:14:53.596 22:06:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:53.596 22:06:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:53.596 { 00:14:53.596 "params": { 00:14:53.596 "name": "Nvme$subsystem", 00:14:53.596 "trtype": "$TEST_TRANSPORT", 00:14:53.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.596 "adrfam": "ipv4", 00:14:53.596 "trsvcid": "$NVMF_PORT", 00:14:53.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.596 "hdgst": ${hdgst:-false}, 00:14:53.596 
"ddgst": ${ddgst:-false} 00:14:53.596 }, 00:14:53.596 "method": "bdev_nvme_attach_controller" 00:14:53.596 } 00:14:53.596 EOF 00:14:53.596 )") 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3930363 00:14:53.596 22:06:35 -- nvmf/common.sh@543 -- # cat 00:14:53.596 22:06:35 -- target/bdev_io_wait.sh@35 -- # sync 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # config=() 00:14:53.596 22:06:35 -- nvmf/common.sh@521 -- # local subsystem config 00:14:53.596 22:06:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:53.596 22:06:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:53.596 { 00:14:53.596 "params": { 00:14:53.596 "name": "Nvme$subsystem", 00:14:53.596 "trtype": "$TEST_TRANSPORT", 00:14:53.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.596 "adrfam": "ipv4", 00:14:53.596 "trsvcid": "$NVMF_PORT", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.597 "hdgst": ${hdgst:-false}, 00:14:53.597 "ddgst": ${ddgst:-false} 00:14:53.597 }, 00:14:53.597 "method": "bdev_nvme_attach_controller" 00:14:53.597 } 00:14:53.597 EOF 00:14:53.597 )") 00:14:53.597 22:06:35 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:53.597 22:06:35 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:53.597 22:06:35 -- nvmf/common.sh@543 -- # cat 00:14:53.597 22:06:35 -- nvmf/common.sh@521 -- # config=() 00:14:53.597 22:06:35 -- nvmf/common.sh@521 -- # local subsystem config 00:14:53.597 22:06:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:53.597 22:06:35 -- 
nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:53.597 { 00:14:53.597 "params": { 00:14:53.597 "name": "Nvme$subsystem", 00:14:53.597 "trtype": "$TEST_TRANSPORT", 00:14:53.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.597 "adrfam": "ipv4", 00:14:53.597 "trsvcid": "$NVMF_PORT", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.597 "hdgst": ${hdgst:-false}, 00:14:53.597 "ddgst": ${ddgst:-false} 00:14:53.597 }, 00:14:53.597 "method": "bdev_nvme_attach_controller" 00:14:53.597 } 00:14:53.597 EOF 00:14:53.597 )") 00:14:53.597 22:06:35 -- nvmf/common.sh@543 -- # cat 00:14:53.597 22:06:35 -- target/bdev_io_wait.sh@37 -- # wait 3930356 00:14:53.597 22:06:35 -- nvmf/common.sh@543 -- # cat 00:14:53.597 22:06:35 -- nvmf/common.sh@545 -- # jq . 00:14:53.597 22:06:35 -- nvmf/common.sh@545 -- # jq . 00:14:53.597 22:06:35 -- nvmf/common.sh@545 -- # jq . 00:14:53.597 22:06:35 -- nvmf/common.sh@546 -- # IFS=, 00:14:53.597 22:06:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:53.597 "params": { 00:14:53.597 "name": "Nvme1", 00:14:53.597 "trtype": "tcp", 00:14:53.597 "traddr": "10.0.0.2", 00:14:53.597 "adrfam": "ipv4", 00:14:53.597 "trsvcid": "4420", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.597 "hdgst": false, 00:14:53.597 "ddgst": false 00:14:53.597 }, 00:14:53.597 "method": "bdev_nvme_attach_controller" 00:14:53.597 }' 00:14:53.597 22:06:35 -- nvmf/common.sh@546 -- # IFS=, 00:14:53.597 22:06:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:53.597 "params": { 00:14:53.597 "name": "Nvme1", 00:14:53.597 "trtype": "tcp", 00:14:53.597 "traddr": "10.0.0.2", 00:14:53.597 "adrfam": "ipv4", 00:14:53.597 "trsvcid": "4420", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.597 "hdgst": false, 00:14:53.597 "ddgst": false 00:14:53.597 }, 00:14:53.597 "method": 
"bdev_nvme_attach_controller" 00:14:53.597 }' 00:14:53.597 22:06:35 -- nvmf/common.sh@545 -- # jq . 00:14:53.597 22:06:35 -- nvmf/common.sh@546 -- # IFS=, 00:14:53.597 22:06:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:53.597 "params": { 00:14:53.597 "name": "Nvme1", 00:14:53.597 "trtype": "tcp", 00:14:53.597 "traddr": "10.0.0.2", 00:14:53.597 "adrfam": "ipv4", 00:14:53.597 "trsvcid": "4420", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.597 "hdgst": false, 00:14:53.597 "ddgst": false 00:14:53.597 }, 00:14:53.597 "method": "bdev_nvme_attach_controller" 00:14:53.597 }' 00:14:53.597 22:06:35 -- nvmf/common.sh@546 -- # IFS=, 00:14:53.597 22:06:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:53.597 "params": { 00:14:53.597 "name": "Nvme1", 00:14:53.597 "trtype": "tcp", 00:14:53.597 "traddr": "10.0.0.2", 00:14:53.597 "adrfam": "ipv4", 00:14:53.597 "trsvcid": "4420", 00:14:53.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.597 "hdgst": false, 00:14:53.597 "ddgst": false 00:14:53.597 }, 00:14:53.597 "method": "bdev_nvme_attach_controller" 00:14:53.597 }' 00:14:53.597 [2024-04-24 22:06:35.764197] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:53.597 [2024-04-24 22:06:35.764197] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:53.597 [2024-04-24 22:06:35.764293] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:53.597 [2024-04-24 22:06:35.764275] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:53.597 [2024-04-24 22:06:35.764274] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:53.597 [2024-04-24 22:06:35.764299] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:53.597 [2024-04-24 22:06:35.764356] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:53.597 [2024-04-24 22:06:35.764357] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:53.597 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.853 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.853 [2024-04-24 22:06:35.961906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.853 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.853 [2024-04-24 22:06:36.065892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:53.853 [2024-04-24 22:06:36.071568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.111 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.111 [2024-04-24 22:06:36.149082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.111 [2024-04-24 22:06:36.178818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.111 [2024-04-24 22:06:36.226476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.111 [2024-04-24 22:06:36.249756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:54.111 [2024-04-24 22:06:36.324650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:54.369 Running I/O for 1 seconds... 00:14:54.369 Running I/O for 1 seconds...
00:14:54.369 Running I/O for 1 seconds... 00:14:54.628 Running I/O for 1 seconds... 00:14:55.569 00:14:55.569 Latency(us) 00:14:55.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.569 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:55.569 Nvme1n1 : 1.01 9151.96 35.75 0.00 0.00 13920.79 8932.31 21165.70 00:14:55.569 =================================================================================================================== 00:14:55.569 Total : 9151.96 35.75 0.00 0.00 13920.79 8932.31 21165.70 00:14:55.569 00:14:55.569 Latency(us) 00:14:55.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.569 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:55.569 Nvme1n1 : 1.01 8082.45 31.57 0.00 0.00 15762.72 6505.05 24272.59 00:14:55.569 =================================================================================================================== 00:14:55.569 Total : 8082.45 31.57 0.00 0.00 15762.72 6505.05 24272.59 00:14:55.569 00:14:55.569 Latency(us) 00:14:55.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.569 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:55.569 Nvme1n1 : 1.01 7795.22 30.45 0.00 0.00 16347.09 6796.33 28350.39 00:14:55.569 =================================================================================================================== 00:14:55.569 Total : 7795.22 30.45 0.00 0.00 16347.09 6796.33 28350.39 00:14:55.569 00:14:55.569 Latency(us) 00:14:55.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.569 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:55.569 Nvme1n1 : 1.00 184403.88 720.33 0.00 0.00 691.29 273.07 831.34 00:14:55.569 =================================================================================================================== 00:14:55.569 Total : 184403.88 720.33 
0.00 0.00 691.29 273.07 831.34 00:14:55.829 22:06:37 -- target/bdev_io_wait.sh@38 -- # wait 3930358 00:14:55.829 22:06:37 -- target/bdev_io_wait.sh@39 -- # wait 3930360 00:14:55.829 22:06:38 -- target/bdev_io_wait.sh@40 -- # wait 3930363 00:14:55.829 22:06:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.829 22:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:55.829 22:06:38 -- common/autotest_common.sh@10 -- # set +x 00:14:55.829 22:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:55.829 22:06:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:55.829 22:06:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:55.829 22:06:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:55.829 22:06:38 -- nvmf/common.sh@117 -- # sync 00:14:55.829 22:06:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.829 22:06:38 -- nvmf/common.sh@120 -- # set +e 00:14:55.829 22:06:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.829 22:06:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.829 rmmod nvme_tcp 00:14:55.829 rmmod nvme_fabrics 00:14:55.829 rmmod nvme_keyring 00:14:55.829 22:06:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.829 22:06:38 -- nvmf/common.sh@124 -- # set -e 00:14:55.829 22:06:38 -- nvmf/common.sh@125 -- # return 0 00:14:55.829 22:06:38 -- nvmf/common.sh@478 -- # '[' -n 3930324 ']' 00:14:55.829 22:06:38 -- nvmf/common.sh@479 -- # killprocess 3930324 00:14:55.829 22:06:38 -- common/autotest_common.sh@936 -- # '[' -z 3930324 ']' 00:14:55.829 22:06:38 -- common/autotest_common.sh@940 -- # kill -0 3930324 00:14:55.829 22:06:38 -- common/autotest_common.sh@941 -- # uname 00:14:55.829 22:06:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.829 22:06:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3930324 00:14:56.088 22:06:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:14:56.088 22:06:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.088 22:06:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3930324' 00:14:56.088 killing process with pid 3930324 00:14:56.088 22:06:38 -- common/autotest_common.sh@955 -- # kill 3930324 00:14:56.088 [2024-04-24 22:06:38.115689] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.088 22:06:38 -- common/autotest_common.sh@960 -- # wait 3930324 00:14:56.349 22:06:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:56.349 22:06:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:56.349 22:06:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:56.349 22:06:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.349 22:06:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.349 22:06:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.349 22:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.349 22:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.257 22:06:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.258 00:14:58.258 real 0m7.782s 00:14:58.258 user 0m18.104s 00:14:58.258 sys 0m3.939s 00:14:58.258 22:06:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.258 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.258 ************************************ 00:14:58.258 END TEST nvmf_bdev_io_wait 00:14:58.258 ************************************ 00:14:58.258 22:06:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:58.258 22:06:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:58.258 22:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 
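The "killing process with pid" trace above follows the killprocess helper pattern from autotest_common.sh: probe liveness with `kill -0`, read the process name with `ps`, refuse to kill a bare `sudo`, then signal and reap. A hypothetical reconstruction of that pattern (not the suite's actual helper, which also handles shared memory and timeouts):

```shell
# Sketch of the killprocess pattern seen in the log. Assumes bash and a
# procps-style ps; the function name mirrors the log but the body is a
# simplified stand-in.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # is the process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != sudo ] || return 1     # never kill a bare sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap so the pid is released
}
```

In the log the probed name is `reactor_0`, the SPDK reactor thread of the nvmf target, so the sudo guard passes and the target is killed.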
00:14:58.258 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.532 ************************************ 00:14:58.533 START TEST nvmf_queue_depth 00:14:58.533 ************************************ 00:14:58.533 22:06:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:58.533 * Looking for test storage... 00:14:58.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.533 22:06:40 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.533 22:06:40 -- nvmf/common.sh@7 -- # uname -s 00:14:58.533 22:06:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.533 22:06:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.533 22:06:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.533 22:06:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.533 22:06:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.533 22:06:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.533 22:06:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.533 22:06:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.533 22:06:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.533 22:06:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.533 22:06:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:58.533 22:06:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:58.533 22:06:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.533 22:06:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.533 22:06:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.533 22:06:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.533 22:06:40 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.533 22:06:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.533 22:06:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.533 22:06:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.533 22:06:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.533 22:06:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.533 22:06:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.534 22:06:40 -- paths/export.sh@5 -- # export PATH 00:14:58.534 22:06:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.534 22:06:40 -- nvmf/common.sh@47 -- # : 0 00:14:58.534 22:06:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.534 22:06:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.534 22:06:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.534 22:06:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.534 22:06:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.534 22:06:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.534 22:06:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.534 22:06:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.534 22:06:40 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:58.534 22:06:40 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:58.534 22:06:40 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.534 22:06:40 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:58.534 22:06:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:58.534 22:06:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.534 22:06:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:58.534 22:06:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:58.534 22:06:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:58.534 22:06:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.534 22:06:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.534 22:06:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.534 22:06:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:58.534 22:06:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:58.534 22:06:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.534 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:15:01.114 22:06:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:01.115 22:06:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.115 22:06:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.115 22:06:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.115 22:06:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.115 22:06:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.115 22:06:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.115 22:06:42 -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.115 22:06:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.115 22:06:42 -- nvmf/common.sh@296 -- # e810=() 00:15:01.115 22:06:42 -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.115 22:06:42 -- nvmf/common.sh@297 -- # x722=() 00:15:01.115 22:06:42 -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.115 22:06:42 -- nvmf/common.sh@298 -- # mlx=() 00:15:01.115 22:06:42 -- nvmf/common.sh@298 -- # local -ga mlx 
00:15:01.115 22:06:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.115 22:06:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.115 22:06:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.115 22:06:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.115 22:06:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.115 22:06:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:01.115 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:01.115 22:06:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.115 22:06:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:01.115 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:01.115 22:06:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.115 22:06:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.115 22:06:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.115 22:06:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.115 22:06:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.115 22:06:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:01.115 Found net devices under 0000:84:00.0: cvl_0_0 00:15:01.115 22:06:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.115 22:06:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.115 22:06:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.115 22:06:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.115 22:06:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.115 22:06:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:01.115 Found net devices under 0000:84:00.1: cvl_0_1 00:15:01.115 22:06:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.115 22:06:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 
00:15:01.115 22:06:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:01.115 22:06:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:01.115 22:06:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:01.115 22:06:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.115 22:06:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.115 22:06:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.115 22:06:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.115 22:06:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.115 22:06:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.115 22:06:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.115 22:06:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.115 22:06:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.115 22:06:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.115 22:06:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.115 22:06:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.115 22:06:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.115 22:06:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.115 22:06:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.115 22:06:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.115 22:06:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.115 22:06:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.115 22:06:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.115 22:06:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.115 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:15:01.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:15:01.115 00:15:01.115 --- 10.0.0.2 ping statistics --- 00:15:01.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.115 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:01.115 22:06:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:15:01.115 00:15:01.115 --- 10.0.0.1 ping statistics --- 00:15:01.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.115 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:01.115 22:06:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.115 22:06:43 -- nvmf/common.sh@411 -- # return 0 00:15:01.115 22:06:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:01.115 22:06:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.115 22:06:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:01.115 22:06:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:01.115 22:06:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.115 22:06:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:01.115 22:06:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:01.115 22:06:43 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:01.115 22:06:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:01.115 22:06:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:01.115 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.115 22:06:43 -- nvmf/common.sh@470 -- # nvmfpid=3932666 00:15:01.115 22:06:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.115 22:06:43 -- nvmf/common.sh@471 -- # waitforlisten 3932666 00:15:01.115 22:06:43 -- 
common/autotest_common.sh@817 -- # '[' -z 3932666 ']' 00:15:01.115 22:06:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.115 22:06:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.115 22:06:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.115 22:06:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.115 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.115 [2024-04-24 22:06:43.113807] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:01.115 [2024-04-24 22:06:43.113901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.115 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.115 [2024-04-24 22:06:43.199799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.115 [2024-04-24 22:06:43.336316] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.115 [2024-04-24 22:06:43.336405] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.115 [2024-04-24 22:06:43.336437] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.115 [2024-04-24 22:06:43.336454] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.115 [2024-04-24 22:06:43.336468] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.115 [2024-04-24 22:06:43.336513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.374 22:06:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.374 22:06:43 -- common/autotest_common.sh@850 -- # return 0 00:15:01.374 22:06:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:01.374 22:06:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 22:06:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.374 22:06:43 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.374 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 [2024-04-24 22:06:43.508691] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.374 22:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.374 22:06:43 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.374 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 Malloc0 00:15:01.374 22:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.374 22:06:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.374 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 22:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.374 22:06:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.374 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 22:06:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.374 22:06:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.374 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 [2024-04-24 22:06:43.572367] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:01.374 [2024-04-24 22:06:43.572742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.374 22:06:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.374 22:06:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=3932744 00:15:01.374 22:06:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:01.374 22:06:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.374 22:06:43 -- target/queue_depth.sh@33 -- # waitforlisten 3932744 /var/tmp/bdevperf.sock 00:15:01.374 22:06:43 -- common/autotest_common.sh@817 -- # '[' -z 3932744 ']' 00:15:01.374 22:06:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.374 22:06:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.374 22:06:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:01.374 22:06:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.374 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:01.374 [2024-04-24 22:06:43.628954] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:01.374 [2024-04-24 22:06:43.629043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932744 ] 00:15:01.632 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.632 [2024-04-24 22:06:43.702633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.632 [2024-04-24 22:06:43.820374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.891 22:06:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.891 22:06:43 -- common/autotest_common.sh@850 -- # return 0 00:15:01.891 22:06:43 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:01.891 22:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.891 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:02.149 NVMe0n1 00:15:02.149 22:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.149 22:06:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.149 Running I/O for 10 seconds... 
00:15:14.349 00:15:14.349 Latency(us) 00:15:14.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.349 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:14.349 Verification LBA range: start 0x0 length 0x4000 00:15:14.349 NVMe0n1 : 10.09 8119.08 31.72 0.00 0.00 125584.45 25049.32 81167.55 00:15:14.349 =================================================================================================================== 00:15:14.349 Total : 8119.08 31.72 0.00 0.00 125584.45 25049.32 81167.55 00:15:14.349 0 00:15:14.349 22:06:54 -- target/queue_depth.sh@39 -- # killprocess 3932744 00:15:14.349 22:06:54 -- common/autotest_common.sh@936 -- # '[' -z 3932744 ']' 00:15:14.349 22:06:54 -- common/autotest_common.sh@940 -- # kill -0 3932744 00:15:14.349 22:06:54 -- common/autotest_common.sh@941 -- # uname 00:15:14.350 22:06:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.350 22:06:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3932744 00:15:14.350 22:06:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.350 22:06:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.350 22:06:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3932744' 00:15:14.350 killing process with pid 3932744 00:15:14.350 22:06:54 -- common/autotest_common.sh@955 -- # kill 3932744 00:15:14.350 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.350 00:15:14.350 Latency(us) 00:15:14.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.350 =================================================================================================================== 00:15:14.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.350 22:06:54 -- common/autotest_common.sh@960 -- # wait 3932744 00:15:14.350 22:06:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:14.350 22:06:54 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:15:14.350 22:06:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:14.350 22:06:54 -- nvmf/common.sh@117 -- # sync 00:15:14.350 22:06:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.350 22:06:54 -- nvmf/common.sh@120 -- # set +e 00:15:14.350 22:06:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.350 22:06:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.350 rmmod nvme_tcp 00:15:14.350 rmmod nvme_fabrics 00:15:14.350 rmmod nvme_keyring 00:15:14.350 22:06:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.350 22:06:54 -- nvmf/common.sh@124 -- # set -e 00:15:14.350 22:06:54 -- nvmf/common.sh@125 -- # return 0 00:15:14.350 22:06:54 -- nvmf/common.sh@478 -- # '[' -n 3932666 ']' 00:15:14.350 22:06:54 -- nvmf/common.sh@479 -- # killprocess 3932666 00:15:14.350 22:06:54 -- common/autotest_common.sh@936 -- # '[' -z 3932666 ']' 00:15:14.350 22:06:54 -- common/autotest_common.sh@940 -- # kill -0 3932666 00:15:14.350 22:06:54 -- common/autotest_common.sh@941 -- # uname 00:15:14.350 22:06:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.350 22:06:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3932666 00:15:14.350 22:06:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:14.350 22:06:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:14.350 22:06:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3932666' 00:15:14.350 killing process with pid 3932666 00:15:14.350 22:06:54 -- common/autotest_common.sh@955 -- # kill 3932666 00:15:14.350 [2024-04-24 22:06:54.891723] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:14.350 22:06:54 -- common/autotest_common.sh@960 -- # wait 3932666 00:15:14.350 22:06:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 
00:15:14.350 22:06:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:14.350 22:06:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:14.350 22:06:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.350 22:06:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.350 22:06:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.350 22:06:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.350 22:06:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.285 22:06:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.285 00:15:15.285 real 0m16.699s 00:15:15.285 user 0m23.075s 00:15:15.285 sys 0m3.599s 00:15:15.285 22:06:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.285 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:15.285 ************************************ 00:15:15.285 END TEST nvmf_queue_depth 00:15:15.285 ************************************ 00:15:15.285 22:06:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:15.285 22:06:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.285 22:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.285 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:15.285 ************************************ 00:15:15.285 START TEST nvmf_multipath 00:15:15.285 ************************************ 00:15:15.285 22:06:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:15.285 * Looking for test storage... 
00:15:15.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.285 22:06:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.285 22:06:57 -- nvmf/common.sh@7 -- # uname -s 00:15:15.285 22:06:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.285 22:06:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.285 22:06:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.285 22:06:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.285 22:06:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.285 22:06:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.285 22:06:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.285 22:06:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.285 22:06:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.285 22:06:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.285 22:06:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.285 22:06:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.285 22:06:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.285 22:06:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.285 22:06:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.285 22:06:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.285 22:06:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.285 22:06:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.285 22:06:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.285 22:06:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.286 22:06:57 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.286 22:06:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.286 22:06:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.286 22:06:57 -- paths/export.sh@5 -- # export PATH 00:15:15.286 22:06:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.286 22:06:57 -- nvmf/common.sh@47 -- # : 0 00:15:15.286 22:06:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.286 22:06:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.286 22:06:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.286 22:06:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.286 22:06:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.286 22:06:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.286 22:06:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.286 22:06:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.286 22:06:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.286 22:06:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.286 22:06:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:15.286 22:06:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.286 22:06:57 -- target/multipath.sh@43 -- # nvmftestinit 00:15:15.286 22:06:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:15.286 22:06:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.286 22:06:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:15.286 22:06:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:15.286 22:06:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:15.286 22:06:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:15.286 22:06:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.286 22:06:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.286 22:06:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:15.286 22:06:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:15.286 22:06:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.286 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:17.815 22:06:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.815 22:06:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.815 22:06:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.815 22:06:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.815 22:06:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.815 22:06:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.815 22:06:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.815 22:06:59 -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.815 22:06:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.815 22:06:59 -- nvmf/common.sh@296 -- # e810=() 00:15:17.815 22:06:59 -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.815 22:06:59 -- nvmf/common.sh@297 -- # x722=() 00:15:17.815 22:06:59 -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.815 22:06:59 -- nvmf/common.sh@298 -- # mlx=() 00:15:17.815 22:06:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.815 22:06:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:15:17.815 22:06:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.815 22:06:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.815 22:06:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.815 22:06:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.815 22:06:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.815 22:06:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.815 22:06:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.815 22:06:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.815 22:06:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:17.815 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:17.815 22:06:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.815 22:06:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.815 22:06:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.816 22:06:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:17.816 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:17.816 22:06:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.816 22:06:59 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.816 22:06:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.816 22:06:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.816 22:06:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.816 22:06:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.816 22:06:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:17.816 Found net devices under 0000:84:00.0: cvl_0_0 00:15:17.816 22:06:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.816 22:06:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.816 22:06:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.816 22:06:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.816 22:06:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.816 22:06:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:17.816 Found net devices under 0000:84:00.1: cvl_0_1 00:15:17.816 22:06:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.816 22:06:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:17.816 22:06:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:17.816 22:06:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:17.816 22:06:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.816 22:06:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.816 22:06:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.816 22:06:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:15:17.816 22:06:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.816 22:06:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.816 22:06:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.816 22:06:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.816 22:06:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.816 22:06:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.816 22:06:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.816 22:06:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.816 22:06:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.816 22:06:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.816 22:06:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.816 22:06:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.816 22:06:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.816 22:06:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.816 22:06:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.816 22:06:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:15:17.816 00:15:17.816 --- 10.0.0.2 ping statistics --- 00:15:17.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.816 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:17.816 22:06:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:15:17.816 00:15:17.816 --- 10.0.0.1 ping statistics --- 00:15:17.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.816 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:17.816 22:06:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.816 22:06:59 -- nvmf/common.sh@411 -- # return 0 00:15:17.816 22:06:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.816 22:06:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.816 22:06:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.816 22:06:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.816 22:06:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.816 22:06:59 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:17.816 22:06:59 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:17.816 only one NIC for nvmf test 00:15:17.816 22:06:59 -- target/multipath.sh@47 -- # nvmftestfini 00:15:17.816 22:06:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:17.816 22:06:59 -- nvmf/common.sh@117 -- # sync 00:15:17.816 22:06:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.816 22:06:59 -- nvmf/common.sh@120 -- # set +e 00:15:17.816 22:06:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.816 22:06:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.816 rmmod nvme_tcp 00:15:17.816 rmmod nvme_fabrics 00:15:17.816 rmmod nvme_keyring 00:15:17.816 22:06:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.816 22:06:59 -- nvmf/common.sh@124 -- # set -e 00:15:17.816 22:06:59 -- nvmf/common.sh@125 -- # return 0 00:15:17.816 22:06:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:17.816 22:06:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:17.816 22:06:59 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:17.816 22:06:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.816 22:06:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.816 22:06:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.816 22:06:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.816 22:06:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.714 22:07:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.714 22:07:01 -- target/multipath.sh@48 -- # exit 0 00:15:19.714 22:07:01 -- target/multipath.sh@1 -- # nvmftestfini 00:15:19.714 22:07:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:19.714 22:07:01 -- nvmf/common.sh@117 -- # sync 00:15:19.714 22:07:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.714 22:07:01 -- nvmf/common.sh@120 -- # set +e 00:15:19.714 22:07:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.714 22:07:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.714 22:07:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.714 22:07:01 -- nvmf/common.sh@124 -- # set -e 00:15:19.714 22:07:01 -- nvmf/common.sh@125 -- # return 0 00:15:19.714 22:07:01 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:19.714 22:07:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:19.714 22:07:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:19.714 22:07:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:19.714 22:07:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.714 22:07:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:19.714 22:07:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.714 22:07:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.714 22:07:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.714 22:07:01 
-- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.714 00:15:19.714 real 0m4.572s 00:15:19.714 user 0m0.828s 00:15:19.714 sys 0m1.746s 00:15:19.714 22:07:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.714 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.714 ************************************ 00:15:19.714 END TEST nvmf_multipath 00:15:19.714 ************************************ 00:15:19.972 22:07:01 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:19.972 22:07:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.972 22:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.972 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:15:19.972 ************************************ 00:15:19.972 START TEST nvmf_zcopy 00:15:19.972 ************************************ 00:15:19.973 22:07:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:19.973 * Looking for test storage... 
00:15:19.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.973 22:07:02 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.973 22:07:02 -- nvmf/common.sh@7 -- # uname -s 00:15:19.973 22:07:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.973 22:07:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.973 22:07:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.973 22:07:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.973 22:07:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.973 22:07:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.973 22:07:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.973 22:07:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.973 22:07:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.973 22:07:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.973 22:07:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:19.973 22:07:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:19.973 22:07:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.973 22:07:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.973 22:07:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.973 22:07:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.973 22:07:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.973 22:07:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.973 22:07:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.973 22:07:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.973 22:07:02 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.973 22:07:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.973 22:07:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.973 22:07:02 -- paths/export.sh@5 -- # export PATH 00:15:19.973 22:07:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.973 22:07:02 -- nvmf/common.sh@47 -- # : 0 00:15:19.973 22:07:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.973 22:07:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.973 22:07:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.973 22:07:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.973 22:07:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.973 22:07:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.973 22:07:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.973 22:07:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.973 22:07:02 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:19.973 22:07:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:19.973 22:07:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.973 22:07:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:19.973 22:07:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:19.973 22:07:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:19.973 22:07:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.973 22:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.973 22:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.973 22:07:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:19.973 22:07:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:19.973 22:07:02 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:15:19.973 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:15:22.502 22:07:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:22.502 22:07:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.502 22:07:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.502 22:07:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.502 22:07:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.502 22:07:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.502 22:07:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.502 22:07:04 -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.502 22:07:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.502 22:07:04 -- nvmf/common.sh@296 -- # e810=() 00:15:22.502 22:07:04 -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.502 22:07:04 -- nvmf/common.sh@297 -- # x722=() 00:15:22.502 22:07:04 -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.502 22:07:04 -- nvmf/common.sh@298 -- # mlx=() 00:15:22.502 22:07:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.502 22:07:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.502 22:07:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.502 22:07:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:22.502 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:22.502 22:07:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.502 22:07:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:22.502 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:22.502 22:07:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:15:22.502 22:07:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.502 22:07:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.502 22:07:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:22.502 Found net devices under 0000:84:00.0: cvl_0_0 00:15:22.502 22:07:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.502 22:07:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.502 22:07:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.502 22:07:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:22.502 Found net devices under 0000:84:00.1: cvl_0_1 00:15:22.502 22:07:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:22.502 22:07:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:22.502 22:07:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.502 22:07:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.502 22:07:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.502 22:07:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.502 22:07:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.502 22:07:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.502 22:07:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
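The discovery pass above maps each Intel E810 function (0x8086:0x159b at 0000:84:00.0 and 0000:84:00.1) to its kernel net device by globbing sysfs, which is what the `pci_net_devs=(...)` lines in nvmf/common.sh do. A standalone sketch of that lookup, with the BDFs taken from this log (requires Linux sysfs; returns nonzero when no net device is bound):

```shell
# Resolve a PCI function to its net device name(s) the way the
# discovery loop above does: glob /sys/bus/pci/devices/$pci/net/.
pci_to_netdev() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    [ -e "${devs[0]}" ] || return 1                  # no net device bound to this function
    printf '%s\n' "${devs[@]##*/}"                   # strip the sysfs path prefix
}
# pci_to_netdev 0000:84:00.0   # -> cvl_0_0 on the machine in this log
```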
00:15:22.502 22:07:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.502 22:07:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.502 22:07:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.502 22:07:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.502 22:07:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.502 22:07:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.502 22:07:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.502 22:07:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.502 22:07:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.502 22:07:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.502 22:07:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.502 22:07:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:15:22.502 00:15:22.502 --- 10.0.0.2 ping statistics --- 00:15:22.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.502 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:22.502 22:07:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:22.502 00:15:22.502 --- 10.0.0.1 ping statistics --- 00:15:22.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.502 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:22.502 22:07:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.502 22:07:04 -- nvmf/common.sh@411 -- # return 0 00:15:22.502 22:07:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:22.502 22:07:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.502 22:07:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:22.502 22:07:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.502 22:07:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:22.502 22:07:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:22.502 22:07:04 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:22.502 22:07:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:22.502 22:07:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:22.502 22:07:04 -- common/autotest_common.sh@10 -- # set +x 00:15:22.502 22:07:04 -- nvmf/common.sh@470 -- # nvmfpid=3937975 00:15:22.502 22:07:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:22.502 22:07:04 -- nvmf/common.sh@471 -- # waitforlisten 3937975 00:15:22.502 22:07:04 -- common/autotest_common.sh@817 -- # '[' -z 3937975 ']' 00:15:22.502 22:07:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.502 22:07:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:22.502 22:07:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:22.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.502 22:07:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:22.502 22:07:04 -- common/autotest_common.sh@10 -- # set +x 00:15:22.502 [2024-04-24 22:07:04.690217] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:22.502 [2024-04-24 22:07:04.690316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.502 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.760 [2024-04-24 22:07:04.765751] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.760 [2024-04-24 22:07:04.888028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.760 [2024-04-24 22:07:04.888103] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.760 [2024-04-24 22:07:04.888119] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.760 [2024-04-24 22:07:04.888132] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.760 [2024-04-24 22:07:04.888145] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
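The `nvmftestinit`/`nvmfappstart` sequence above reduces to two steps: wire the target port into a private network namespace, then launch `nvmf_tgt` inside that namespace and wait for its RPC socket. The interface names, addresses, iptables rule, and `nvmf_tgt` arguments below are taken verbatim from this log; the socket-polling loop is a simplified stand-in for the harness's `waitforlisten`. Run as root on a host that actually has the `cvl_0_0`/`cvl_0_1` devices:

```shell
# Namespace wiring performed by nvmftestinit (nvmf/common.sh) above.
setup_spdk_netns() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # default ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> default ns
}

# nvmfappstart -m 0x2, as invoked in the log above.
start_nvmf_tgt() {
    local tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Simplified waitforlisten: poll for the RPC Unix domain socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
}
```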
00:15:22.760 [2024-04-24 22:07:04.888188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.760 22:07:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:22.760 22:07:05 -- common/autotest_common.sh@850 -- # return 0 00:15:22.760 22:07:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:22.760 22:07:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:22.760 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 22:07:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.018 22:07:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:23.018 22:07:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 [2024-04-24 22:07:05.047156] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 [2024-04-24 22:07:05.063110] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:23.018 [2024-04-24 22:07:05.063415] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 malloc0 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:23.018 22:07:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.018 22:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:23.018 22:07:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.018 22:07:05 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:23.018 22:07:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:23.018 22:07:05 -- nvmf/common.sh@521 -- # config=() 00:15:23.018 22:07:05 -- nvmf/common.sh@521 -- # local subsystem config 00:15:23.018 22:07:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:23.018 22:07:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:23.018 { 00:15:23.018 "params": { 00:15:23.018 "name": "Nvme$subsystem", 00:15:23.018 "trtype": "$TEST_TRANSPORT", 00:15:23.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.018 "adrfam": "ipv4", 00:15:23.018 "trsvcid": "$NVMF_PORT", 00:15:23.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.018 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.018 "hdgst": ${hdgst:-false}, 00:15:23.018 "ddgst": ${ddgst:-false} 00:15:23.018 }, 00:15:23.018 "method": "bdev_nvme_attach_controller" 00:15:23.018 } 00:15:23.018 EOF 00:15:23.018 )") 00:15:23.018 22:07:05 -- nvmf/common.sh@543 -- # cat 00:15:23.018 22:07:05 -- nvmf/common.sh@545 -- # jq . 00:15:23.018 22:07:05 -- nvmf/common.sh@546 -- # IFS=, 00:15:23.018 22:07:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:23.018 "params": { 00:15:23.018 "name": "Nvme1", 00:15:23.018 "trtype": "tcp", 00:15:23.018 "traddr": "10.0.0.2", 00:15:23.018 "adrfam": "ipv4", 00:15:23.018 "trsvcid": "4420", 00:15:23.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.018 "hdgst": false, 00:15:23.018 "ddgst": false 00:15:23.018 }, 00:15:23.018 "method": "bdev_nvme_attach_controller" 00:15:23.018 }' 00:15:23.018 [2024-04-24 22:07:05.149263] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:23.018 [2024-04-24 22:07:05.149343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938114 ] 00:15:23.018 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.018 [2024-04-24 22:07:05.219769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.277 [2024-04-24 22:07:05.342266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.535 Running I/O for 10 seconds... 
00:15:33.512 
00:15:33.512 Latency(us) 
00:15:33.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:15:33.512 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 
00:15:33.512 Verification LBA range: start 0x0 length 0x1000 
00:15:33.512 Nvme1n1 : 10.02 5311.70 41.50 0.00 0.00 24031.64 1808.31 35340.89 
00:15:33.512 =================================================================================================================== 
00:15:33.512 Total : 5311.70 41.50 0.00 0.00 24031.64 1808.31 35340.89 
00:15:34.118 22:07:16 -- target/zcopy.sh@39 -- # perfpid=3939310 
00:15:34.118 22:07:16 -- target/zcopy.sh@41 -- # xtrace_disable 
00:15:34.118 22:07:16 -- common/autotest_common.sh@10 -- # set +x 
00:15:34.118 22:07:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 
00:15:34.118 22:07:16 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 
00:15:34.118 22:07:16 -- nvmf/common.sh@521 -- # config=() 
00:15:34.118 22:07:16 -- nvmf/common.sh@521 -- # local subsystem config 
00:15:34.118 22:07:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:15:34.118 22:07:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:15:34.118 { 
00:15:34.118 "params": { 
00:15:34.118 "name": "Nvme$subsystem", 
00:15:34.118 "trtype": "$TEST_TRANSPORT", 
00:15:34.118 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:15:34.118 "adrfam": "ipv4", 
00:15:34.118 "trsvcid": "$NVMF_PORT", 
00:15:34.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:34.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:15:34.118 "hdgst": ${hdgst:-false}, 
00:15:34.118 "ddgst": ${ddgst:-false} 
00:15:34.118 }, 
00:15:34.118 "method": "bdev_nvme_attach_controller" 
00:15:34.118 } 
00:15:34.118 EOF 
00:15:34.118 )") 
00:15:34.118 [2024-04-24 22:07:16.035604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:34.118 [2024-04-24
22:07:16.035649] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 22:07:16 -- nvmf/common.sh@543 -- # cat 00:15:34.118 22:07:16 -- nvmf/common.sh@545 -- # jq . 00:15:34.118 22:07:16 -- nvmf/common.sh@546 -- # IFS=, 00:15:34.118 22:07:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:34.118 "params": { 00:15:34.118 "name": "Nvme1", 00:15:34.118 "trtype": "tcp", 00:15:34.118 "traddr": "10.0.0.2", 00:15:34.118 "adrfam": "ipv4", 00:15:34.118 "trsvcid": "4420", 00:15:34.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.118 "hdgst": false, 00:15:34.118 "ddgst": false 00:15:34.118 }, 00:15:34.118 "method": "bdev_nvme_attach_controller" 00:15:34.118 }' 00:15:34.118 [2024-04-24 22:07:16.043558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.043585] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.051577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.051602] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.059599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.059624] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.067621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.067646] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.075644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.075669] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 
22:07:16.080990] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:34.118 [2024-04-24 22:07:16.081076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939310 ] 00:15:34.118 [2024-04-24 22:07:16.083666] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.083693] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.091689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.091714] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.099720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.099744] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.107730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.107756] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.115750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.115774] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.118 [2024-04-24 22:07:16.123772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.123797] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.131793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.131818] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.139815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.139840] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.147838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.147863] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.155878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.155904] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.155961] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.118 [2024-04-24 22:07:16.163916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.163955] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.171926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.171960] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.179929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.179954] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.187953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.187979] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.195974] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.195999] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.203995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.204021] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.212018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.212043] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.220043] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.220069] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.228083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.228119] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.236084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.236111] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.244104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.244129] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.252127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.252154] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.260150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.260177] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.268172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.268199] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.276193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.276218] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.277410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.118 [2024-04-24 22:07:16.284215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.284240] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.292257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.292285] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.118 [2024-04-24 22:07:16.300287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.118 [2024-04-24 22:07:16.300325] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.308308] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.308347] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.316335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.316376] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 
22:07:16.324365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.324412] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.332376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.332424] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.340404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.340452] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.348403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.348430] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.356457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.356493] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.364480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.364526] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.119 [2024-04-24 22:07:16.372489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.119 [2024-04-24 22:07:16.372521] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.380486] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.380512] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.388515] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.388540] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.396547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.396578] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.404560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.404588] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.412585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.412612] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.420610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.420637] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.428630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.428657] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.436649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.436674] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.444670] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.444695] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.452694] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.452719] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.460718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.460745] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.468743] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.468770] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.476763] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.476790] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.377 [2024-04-24 22:07:16.484783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.377 [2024-04-24 22:07:16.484808] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.492808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.492834] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.500829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.500854] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.508852] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.508877] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.516882] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 
[2024-04-24 22:07:16.516918] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.524900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.524926] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.532924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.532949] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.540947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.540972] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.548968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.548993] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.556996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.557024] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.565019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.565045] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.573040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.573065] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.581064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.581088] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.589088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.589113] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.597110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.597135] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.605132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.605158] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.613154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.613178] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 [2024-04-24 22:07:16.621191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.621220] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.378 Running I/O for 5 seconds... 
00:15:34.378 [2024-04-24 22:07:16.629203] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.378 [2024-04-24 22:07:16.629229] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.643720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.643752] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.655685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.655716] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.668307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.668338] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.682702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.682732] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.694339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.694369] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.706653] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.706684] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.719599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.719630] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.731703] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.731735] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.743527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.743559] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.636 [2024-04-24 22:07:16.755347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.636 [2024-04-24 22:07:16.755377] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.767596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.767626] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.779953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.779983] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.791649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.791679] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.803618] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.803648] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.815776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.815808] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.828022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.828053] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.840441] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.840471] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.852476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.852506] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.866331] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.866362] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.877564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.877595] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.637 [2024-04-24 22:07:16.889511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.637 [2024-04-24 22:07:16.889542] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.901692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.901723] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.913936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.913966] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.925646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 
[2024-04-24 22:07:16.925676] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.937675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.937705] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.949309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.949340] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.961045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.961075] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.973148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.973177] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.985033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.985064] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:16.998995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:16.999025] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.010542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.010572] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.022716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.022746] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.034798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.034828] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.046873] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.046903] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.058899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.058930] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.071193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.071223] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.083074] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.083105] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.095188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.095218] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.107287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.895 [2024-04-24 22:07:17.107317] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.895 [2024-04-24 22:07:17.118924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.896 [2024-04-24 22:07:17.118954] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:34.896 [2024-04-24 22:07:17.131078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.896 [2024-04-24 22:07:17.131108] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.896 [2024-04-24 22:07:17.142750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.896 [2024-04-24 22:07:17.142781] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.154682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.154712] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.167000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.167030] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.179140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.179170] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.191699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.191729] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.203482] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.203511] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.215515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.215555] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.227517] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.227547] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.239380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.239419] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.251029] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.251059] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.263419] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.263458] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.275794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.275824] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.288156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.288185] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.299634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.299672] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.311608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.311638] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.323555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.323586] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.335545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.335576] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.347356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.347386] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.359008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.359038] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.370777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.370808] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.382157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.382186] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.394049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.394080] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.154 [2024-04-24 22:07:17.406488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.154 [2024-04-24 22:07:17.406519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.418539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 
[2024-04-24 22:07:17.418570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.430540] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.430571] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.442634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.442666] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.455001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.455032] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.467081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.467112] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.478737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.478767] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.490803] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.490834] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.502366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.502405] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.514514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.514546] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.526172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.526214] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.537980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.538010] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.549614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.549654] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.561484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.561514] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.573237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.573268] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.586728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.586758] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.597916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.597955] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.609663] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.609693] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:35.413 [2024-04-24 22:07:17.621892] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.621922] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.633634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.633665] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.645644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.645674] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.413 [2024-04-24 22:07:17.657420] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.413 [2024-04-24 22:07:17.657451] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.669556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.669586] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.681705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.681735] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.693644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.693674] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.705747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.705777] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.717806] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.717836] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.729810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.729841] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.741152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.741182] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.753039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.753069] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.765245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.765287] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.776755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.776785] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.790587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.790617] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.802030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.802060] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.813880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.813911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.825898] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.825937] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.837569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.837600] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.849525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.849555] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.861488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.861519] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.873577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.873607] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.885094] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.885124] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.899117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.899147] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.910618] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 
[2024-04-24 22:07:17.910657] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.672 [2024-04-24 22:07:17.922660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.672 [2024-04-24 22:07:17.922690] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.934817] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.934848] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.946796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.946827] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.958636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.958666] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.970388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.970428] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.982333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.982363] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:17.994389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:17.994428] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:18.008269] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:18.008300] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.931 [2024-04-24 22:07:18.019704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.931 [2024-04-24 22:07:18.019734] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record error pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats at roughly 12 ms intervals from 22:07:18.032 through 22:07:19.979 (elapsed 00:15:35.931 to 00:15:37.749) ...]
00:15:37.749 [2024-04-24 22:07:19.991325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.749 [2024-04-24 22:07:19.991356] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.749 [2024-04-24 22:07:20.003292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.749 [2024-04-24 22:07:20.003325] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.015186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.015216] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.028071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.028103] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.039795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.039827] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.051800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.051831] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.063631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.063662] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.075462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.075492] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.088990] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.089020] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:38.007 [2024-04-24 22:07:20.099750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.099790] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.112426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.112473] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.124302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.124333] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.136219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.136249] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.150254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.150294] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.161306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.161338] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.173273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.173304] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.185533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.185564] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.197509] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.197541] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.209833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.209869] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.221876] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.221906] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.234057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.234088] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.245946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.007 [2024-04-24 22:07:20.245976] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.007 [2024-04-24 22:07:20.258116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.008 [2024-04-24 22:07:20.258146] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.269885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.269916] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.281770] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.281800] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.293685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.293716] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.305828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.305858] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.318032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.318063] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.330045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.330076] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.342344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.342375] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.354544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.354575] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.366651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.366681] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.378626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.378656] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.390654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 
[2024-04-24 22:07:20.390685] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.402986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.403017] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.415082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.415113] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.427530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.427560] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.439646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.439676] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.451673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.451703] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.463575] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.463605] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.475680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.475710] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.487731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.487761] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.499539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.266 [2024-04-24 22:07:20.499570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.266 [2024-04-24 22:07:20.511629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.267 [2024-04-24 22:07:20.511660] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.523796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.523827] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.536381] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.536420] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.548827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.548857] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.560798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.560828] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.572968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.572998] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.585211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.585242] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:38.525 [2024-04-24 22:07:20.597109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.597140] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.608783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.608815] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.620474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.620504] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.632705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.632735] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.644468] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.644499] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.658371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.658409] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.670433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.670464] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.682148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.682178] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.694245] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.694279] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.706414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.706445] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.718571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.718601] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.730510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.730540] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.742661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.742692] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.754674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.754704] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.767635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.767665] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.525 [2024-04-24 22:07:20.779755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.525 [2024-04-24 22:07:20.779785] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.791897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.791928] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.804230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.804261] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.815794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.815825] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.827506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.827538] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.839404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.839442] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.851628] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.851659] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.784 [2024-04-24 22:07:20.863603] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.784 [2024-04-24 22:07:20.863635] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.875755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.875786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.887929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 
[2024-04-24 22:07:20.887961] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.900118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.900148] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.912112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.912142] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.923911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.923942] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.936131] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.936162] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.948460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.948490] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.959921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.959951] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.972169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.972199] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.984086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.984116] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:20.995820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:20.995850] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:21.008125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:21.008155] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:21.020640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:21.020670] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.785 [2024-04-24 22:07:21.032631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.785 [2024-04-24 22:07:21.032661] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.044567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.044598] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.056583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.056614] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.068503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.068534] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.080636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.080667] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:39.043 [2024-04-24 22:07:21.092830] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.092861] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.104751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.104782] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.116798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.116829] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.043 [2024-04-24 22:07:21.129368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.043 [2024-04-24 22:07:21.129408] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.141391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.141431] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.153126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.153157] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.165442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.165473] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.177594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.177625] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.189468] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.189499] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.201242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.201272] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.213451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.213481] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.229880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.229911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.241718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.241748] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.254012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.254062] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.266454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.266485] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.278631] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.278662] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.044 [2024-04-24 22:07:21.290647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:39.044 [2024-04-24 22:07:21.290677] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.302667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.302698] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.314682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.314713] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.326944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.326975] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.339421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.339451] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.351836] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.351867] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.364671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.364701] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.376475] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.376506] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.388574] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 
[2024-04-24 22:07:21.388606] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.400573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.400604] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.412556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.412586] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.302 [2024-04-24 22:07:21.424542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.302 [2024-04-24 22:07:21.424573] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.436897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.436929] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.449062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.449092] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.460740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.460771] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.472656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.472687] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.484926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.484965] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.496741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.496772] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.509059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.509095] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.521407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.521438] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.533642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.533673] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.303 [2024-04-24 22:07:21.545951] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.303 [2024-04-24 22:07:21.545981] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.558129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.558159] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.569929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.569960] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.581921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.581951] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:39.561 [2024-04-24 22:07:21.594706] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.594737] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.606971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.607002] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.618959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.618989] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.631853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.631884] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.643879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.643909] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 [2024-04-24 22:07:21.650458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.561 [2024-04-24 22:07:21.650488] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.561 00:15:39.561 Latency(us) 00:15:39.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.561 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:39.561 Nvme1n1 : 5.01 10537.68 82.33 0.00 0.00 12128.92 5509.88 24272.59 00:15:39.562 =================================================================================================================== 00:15:39.562 Total : 10537.68 82.33 0.00 0.00 12128.92 5509.88 24272.59 00:15:39.562 
[2024-04-24 22:07:21.658503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.658531] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.666521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.666559] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.674546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.674576] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.682612] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.682659] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.690638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.690684] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.698656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.698704] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.706668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.706712] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.714691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.714740] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.722717] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.722763] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.730737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.730780] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.738754] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.738801] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.746779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.746824] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.754805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.754855] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.762827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.762876] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.770842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.770886] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.778864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.778912] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.786885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.786930] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.794912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.794957] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.802911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.802947] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.562 [2024-04-24 22:07:21.810909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.562 [2024-04-24 22:07:21.810936] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.818932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.818968] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.826954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.826980] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.834977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.835002] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.843040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.843082] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.851069] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 
[2024-04-24 22:07:21.851120] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.859084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.859129] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.867068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.867093] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.875090] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.875115] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.883115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.883142] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.891136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.891162] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.899166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.899196] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.907223] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.907270] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.915240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.915286] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.923229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.820 [2024-04-24 22:07:21.923255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.820 [2024-04-24 22:07:21.931251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.821 [2024-04-24 22:07:21.931276] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.821 [2024-04-24 22:07:21.939273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.821 [2024-04-24 22:07:21.939299] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3939310) - No such process 00:15:39.821 22:07:21 -- target/zcopy.sh@49 -- # wait 3939310 00:15:39.821 22:07:21 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.821 22:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.821 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.821 22:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.821 22:07:21 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:39.821 22:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.821 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.821 delay0 00:15:39.821 22:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.821 22:07:21 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:39.821 22:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.821 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.821 22:07:21 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.821 22:07:21 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:39.821 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.821 [2024-04-24 22:07:22.020856] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:46.378 Initializing NVMe Controllers 00:15:46.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:46.378 Initialization complete. Launching workers. 00:15:46.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 114 00:15:46.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 47 00:15:46.378 success 213, unsuccess 174, failed 0 00:15:46.378 22:07:28 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:46.378 22:07:28 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:46.378 22:07:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:46.378 22:07:28 -- nvmf/common.sh@117 -- # sync 00:15:46.378 22:07:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.378 22:07:28 -- nvmf/common.sh@120 -- # set +e 00:15:46.378 22:07:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.378 22:07:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.378 rmmod nvme_tcp 00:15:46.378 rmmod nvme_fabrics 00:15:46.378 rmmod nvme_keyring 00:15:46.378 22:07:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.378 22:07:28 -- nvmf/common.sh@124 -- # set -e 00:15:46.378 22:07:28 -- nvmf/common.sh@125 -- # return 0 00:15:46.378 22:07:28 -- nvmf/common.sh@478 -- # '[' -n 3937975 ']' 00:15:46.378 22:07:28 -- nvmf/common.sh@479 
-- # killprocess 3937975 00:15:46.378 22:07:28 -- common/autotest_common.sh@936 -- # '[' -z 3937975 ']' 00:15:46.378 22:07:28 -- common/autotest_common.sh@940 -- # kill -0 3937975 00:15:46.378 22:07:28 -- common/autotest_common.sh@941 -- # uname 00:15:46.378 22:07:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.378 22:07:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3937975 00:15:46.378 22:07:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:46.378 22:07:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:46.378 22:07:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3937975' 00:15:46.378 killing process with pid 3937975 00:15:46.378 22:07:28 -- common/autotest_common.sh@955 -- # kill 3937975 00:15:46.378 [2024-04-24 22:07:28.234758] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:46.378 22:07:28 -- common/autotest_common.sh@960 -- # wait 3937975 00:15:46.378 22:07:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:46.378 22:07:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:46.378 22:07:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:46.378 22:07:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.378 22:07:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.378 22:07:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.378 22:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.378 22:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.915 22:07:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.915 00:15:48.915 real 0m28.469s 00:15:48.915 user 0m41.070s 00:15:48.915 sys 0m9.285s 00:15:48.915 22:07:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.915 22:07:30 
-- common/autotest_common.sh@10 -- # set +x 00:15:48.915 ************************************ 00:15:48.915 END TEST nvmf_zcopy 00:15:48.915 ************************************ 00:15:48.915 22:07:30 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:48.915 22:07:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.915 22:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.915 22:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:48.915 ************************************ 00:15:48.915 START TEST nvmf_nmic 00:15:48.915 ************************************ 00:15:48.915 22:07:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:48.915 * Looking for test storage... 00:15:48.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.915 22:07:30 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.915 22:07:30 -- nvmf/common.sh@7 -- # uname -s 00:15:48.915 22:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.915 22:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.915 22:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.915 22:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.915 22:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.915 22:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.915 22:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.915 22:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.915 22:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.915 22:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.915 22:07:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:48.915 22:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:48.915 22:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.915 22:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.915 22:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.915 22:07:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.915 22:07:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.915 22:07:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.915 22:07:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.915 22:07:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.915 22:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.915 22:07:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.916 22:07:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.916 22:07:30 -- paths/export.sh@5 -- # export PATH 00:15:48.916 22:07:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.916 22:07:30 -- nvmf/common.sh@47 -- # : 0 00:15:48.916 22:07:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.916 22:07:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.916 22:07:30 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:15:48.916 22:07:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.916 22:07:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.916 22:07:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.916 22:07:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.916 22:07:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.916 22:07:30 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.916 22:07:30 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.916 22:07:30 -- target/nmic.sh@14 -- # nvmftestinit 00:15:48.916 22:07:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:48.916 22:07:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.916 22:07:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:48.916 22:07:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:48.916 22:07:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:48.916 22:07:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.916 22:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.916 22:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.916 22:07:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:48.916 22:07:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:48.916 22:07:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.916 22:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:51.444 22:07:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:51.444 22:07:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.444 22:07:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.444 22:07:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.444 22:07:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.444 22:07:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.444 22:07:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.444 22:07:33 -- nvmf/common.sh@295 -- # 
net_devs=() 00:15:51.444 22:07:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.444 22:07:33 -- nvmf/common.sh@296 -- # e810=() 00:15:51.444 22:07:33 -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.444 22:07:33 -- nvmf/common.sh@297 -- # x722=() 00:15:51.444 22:07:33 -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.444 22:07:33 -- nvmf/common.sh@298 -- # mlx=() 00:15:51.444 22:07:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.444 22:07:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.444 22:07:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.444 22:07:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.444 22:07:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.444 22:07:33 -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:51.444 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:51.444 22:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.444 22:07:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:51.444 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:51.444 22:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.444 22:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.444 22:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.444 22:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:51.444 Found net devices under 0000:84:00.0: cvl_0_0 00:15:51.444 22:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.444 22:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.444 22:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.444 
22:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.444 22:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:51.444 Found net devices under 0000:84:00.1: cvl_0_1 00:15:51.444 22:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.444 22:07:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:51.444 22:07:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:51.444 22:07:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:51.444 22:07:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.444 22:07:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.444 22:07:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.444 22:07:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.444 22:07:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.444 22:07:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.445 22:07:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.445 22:07:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.445 22:07:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.445 22:07:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.445 22:07:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.445 22:07:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.445 22:07:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.445 22:07:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.445 22:07:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.445 22:07:33 -- nvmf/common.sh@258 -- # ip 
link set cvl_0_1 up 00:15:51.445 22:07:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.445 22:07:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.445 22:07:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.445 22:07:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:15:51.445 00:15:51.445 --- 10.0.0.2 ping statistics --- 00:15:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.445 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:51.445 22:07:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:15:51.445 00:15:51.445 --- 10.0.0.1 ping statistics --- 00:15:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.445 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:15:51.445 22:07:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.445 22:07:33 -- nvmf/common.sh@411 -- # return 0 00:15:51.445 22:07:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:51.445 22:07:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.445 22:07:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:51.445 22:07:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:51.445 22:07:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.445 22:07:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:51.445 22:07:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:51.445 22:07:33 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:51.445 22:07:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:51.445 22:07:33 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:15:51.445 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.445 22:07:33 -- nvmf/common.sh@470 -- # nvmfpid=3942715 00:15:51.445 22:07:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.445 22:07:33 -- nvmf/common.sh@471 -- # waitforlisten 3942715 00:15:51.445 22:07:33 -- common/autotest_common.sh@817 -- # '[' -z 3942715 ']' 00:15:51.445 22:07:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.445 22:07:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.445 22:07:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.445 22:07:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.445 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.445 [2024-04-24 22:07:33.349654] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:51.445 [2024-04-24 22:07:33.349837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.445 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.445 [2024-04-24 22:07:33.457587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.445 [2024-04-24 22:07:33.579604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.445 [2024-04-24 22:07:33.579677] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.445 [2024-04-24 22:07:33.579694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.445 [2024-04-24 22:07:33.579708] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.445 [2024-04-24 22:07:33.579719] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.445 [2024-04-24 22:07:33.579826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.445 [2024-04-24 22:07:33.579891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.445 [2024-04-24 22:07:33.579944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.445 [2024-04-24 22:07:33.579947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.703 22:07:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.703 22:07:33 -- common/autotest_common.sh@850 -- # return 0 00:15:51.703 22:07:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:51.703 22:07:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 22:07:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.703 22:07:33 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 [2024-04-24 22:07:33.742386] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 Malloc0 00:15:51.703 
22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 [2024-04-24 22:07:33.796685] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:51.703 [2024-04-24 22:07:33.797019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:51.703 test case1: single bdev can't be used in multiple subsystems 00:15:51.703 22:07:33 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- 
target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@28 -- # nmic_status=0 00:15:51.703 22:07:33 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 [2024-04-24 22:07:33.820807] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:51.703 [2024-04-24 22:07:33.820840] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:51.703 [2024-04-24 22:07:33.820857] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.703 request: 00:15:51.703 { 00:15:51.703 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:51.703 "namespace": { 00:15:51.703 "bdev_name": "Malloc0", 00:15:51.703 "no_auto_visible": false 00:15:51.703 }, 00:15:51.703 "method": "nvmf_subsystem_add_ns", 00:15:51.703 "req_id": 1 00:15:51.703 } 00:15:51.703 Got JSON-RPC error response 00:15:51.703 response: 00:15:51.703 { 00:15:51.703 "code": -32602, 00:15:51.703 "message": "Invalid parameters" 00:15:51.703 } 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@29 -- # nmic_status=1 00:15:51.703 22:07:33 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:51.703 22:07:33 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:51.703 Adding namespace failed - expected result. 
00:15:51.703 22:07:33 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:51.703 test case2: host connect to nvmf target in multiple paths 00:15:51.703 22:07:33 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:51.703 22:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.703 22:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:51.703 [2024-04-24 22:07:33.828937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:51.703 22:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.703 22:07:33 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.269 22:07:34 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:52.834 22:07:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.834 22:07:35 -- common/autotest_common.sh@1184 -- # local i=0 00:15:52.834 22:07:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.834 22:07:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:52.834 22:07:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:55.360 22:07:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:55.360 22:07:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:55.360 22:07:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.360 22:07:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:55.360 22:07:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:15:55.360 22:07:37 -- common/autotest_common.sh@1194 -- # return 0 00:15:55.360 22:07:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:55.360 [global] 00:15:55.360 thread=1 00:15:55.360 invalidate=1 00:15:55.360 rw=write 00:15:55.360 time_based=1 00:15:55.360 runtime=1 00:15:55.360 ioengine=libaio 00:15:55.360 direct=1 00:15:55.360 bs=4096 00:15:55.360 iodepth=1 00:15:55.360 norandommap=0 00:15:55.360 numjobs=1 00:15:55.360 00:15:55.360 verify_dump=1 00:15:55.360 verify_backlog=512 00:15:55.360 verify_state_save=0 00:15:55.360 do_verify=1 00:15:55.360 verify=crc32c-intel 00:15:55.360 [job0] 00:15:55.360 filename=/dev/nvme0n1 00:15:55.360 Could not set queue depth (nvme0n1) 00:15:55.360 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:55.360 fio-3.35 00:15:55.360 Starting 1 thread 00:15:56.292 00:15:56.292 job0: (groupid=0, jobs=1): err= 0: pid=3943342: Wed Apr 24 22:07:38 2024 00:15:56.292 read: IOPS=1384, BW=5538KiB/s (5671kB/s)(5544KiB/1001msec) 00:15:56.292 slat (nsec): min=6602, max=31480, avg=7721.38, stdev=1835.72 00:15:56.292 clat (usec): min=254, max=41026, avg=466.24, stdev=2670.42 00:15:56.292 lat (usec): min=261, max=41041, avg=473.96, stdev=2670.91 00:15:56.292 clat percentiles (usec): 00:15:56.292 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:15:56.292 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:15:56.292 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 318], 00:15:56.292 | 99.00th=[ 347], 99.50th=[ 1663], 99.90th=[41157], 99.95th=[41157], 00:15:56.292 | 99.99th=[41157] 00:15:56.292 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:56.292 slat (nsec): min=8596, max=35414, avg=10088.89, stdev=2712.89 00:15:56.292 clat (usec): min=177, max=345, avg=208.42, stdev=22.19 00:15:56.292 lat (usec): min=187, max=355, 
avg=218.51, stdev=22.83 00:15:56.292 clat percentiles (usec): 00:15:56.292 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:15:56.292 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:15:56.292 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 247], 00:15:56.292 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 347], 00:15:56.292 | 99.99th=[ 347] 00:15:56.292 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:15:56.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:56.292 lat (usec) : 250=50.21%, 500=49.52% 00:15:56.292 lat (msec) : 2=0.07%, 50=0.21% 00:15:56.292 cpu : usr=1.80%, sys=3.80%, ctx=2922, majf=0, minf=2 00:15:56.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.292 issued rwts: total=1386,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.292 00:15:56.292 Run status group 0 (all jobs): 00:15:56.292 READ: bw=5538KiB/s (5671kB/s), 5538KiB/s-5538KiB/s (5671kB/s-5671kB/s), io=5544KiB (5677kB), run=1001-1001msec 00:15:56.292 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:15:56.292 00:15:56.292 Disk stats (read/write): 00:15:56.292 nvme0n1: ios=1084/1536, merge=0/0, ticks=574/307, in_queue=881, util=91.78% 00:15:56.292 22:07:38 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:56.292 22:07:38 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.292 22:07:38 -- common/autotest_common.sh@1205 -- # local i=0 00:15:56.292 22:07:38 -- common/autotest_common.sh@1206 -- # lsblk -o 
NAME,SERIAL 00:15:56.292 22:07:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.549 22:07:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:56.549 22:07:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.549 22:07:38 -- common/autotest_common.sh@1217 -- # return 0 00:15:56.549 22:07:38 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:56.549 22:07:38 -- target/nmic.sh@53 -- # nvmftestfini 00:15:56.549 22:07:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:56.549 22:07:38 -- nvmf/common.sh@117 -- # sync 00:15:56.549 22:07:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.549 22:07:38 -- nvmf/common.sh@120 -- # set +e 00:15:56.549 22:07:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.549 22:07:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.549 rmmod nvme_tcp 00:15:56.549 rmmod nvme_fabrics 00:15:56.549 rmmod nvme_keyring 00:15:56.549 22:07:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.549 22:07:38 -- nvmf/common.sh@124 -- # set -e 00:15:56.549 22:07:38 -- nvmf/common.sh@125 -- # return 0 00:15:56.549 22:07:38 -- nvmf/common.sh@478 -- # '[' -n 3942715 ']' 00:15:56.549 22:07:38 -- nvmf/common.sh@479 -- # killprocess 3942715 00:15:56.549 22:07:38 -- common/autotest_common.sh@936 -- # '[' -z 3942715 ']' 00:15:56.549 22:07:38 -- common/autotest_common.sh@940 -- # kill -0 3942715 00:15:56.549 22:07:38 -- common/autotest_common.sh@941 -- # uname 00:15:56.549 22:07:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.549 22:07:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3942715 00:15:56.549 22:07:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.549 22:07:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.549 22:07:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3942715' 00:15:56.549 killing process with pid 3942715 
00:15:56.549 22:07:38 -- common/autotest_common.sh@955 -- # kill 3942715 00:15:56.549 [2024-04-24 22:07:38.655587] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:56.549 22:07:38 -- common/autotest_common.sh@960 -- # wait 3942715 00:15:56.811 22:07:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:56.811 22:07:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:56.811 22:07:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:56.811 22:07:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.811 22:07:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.811 22:07:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.811 22:07:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.811 22:07:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.371 22:07:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.371 00:15:59.371 real 0m10.281s 00:15:59.371 user 0m22.486s 00:15:59.371 sys 0m2.672s 00:15:59.371 22:07:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:59.371 22:07:41 -- common/autotest_common.sh@10 -- # set +x 00:15:59.371 ************************************ 00:15:59.371 END TEST nvmf_nmic 00:15:59.371 ************************************ 00:15:59.371 22:07:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:59.371 22:07:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:59.371 22:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.371 22:07:41 -- common/autotest_common.sh@10 -- # set +x 00:15:59.371 ************************************ 00:15:59.371 START TEST nvmf_fio_target 00:15:59.371 ************************************ 00:15:59.371 22:07:41 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:59.371 * Looking for test storage... 00:15:59.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.372 22:07:41 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.372 22:07:41 -- nvmf/common.sh@7 -- # uname -s 00:15:59.372 22:07:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.372 22:07:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.372 22:07:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.372 22:07:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.372 22:07:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.372 22:07:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.372 22:07:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.372 22:07:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.372 22:07:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.372 22:07:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.372 22:07:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.372 22:07:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.372 22:07:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.372 22:07:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.372 22:07:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.372 22:07:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.372 22:07:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.372 22:07:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.372 22:07:41 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.372 22:07:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.372 22:07:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.372 22:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.372 22:07:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.372 22:07:41 -- paths/export.sh@5 -- # export PATH 00:15:59.372 22:07:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.372 22:07:41 -- nvmf/common.sh@47 -- # : 0 00:15:59.372 22:07:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.372 22:07:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.372 22:07:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.372 22:07:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.372 22:07:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.372 22:07:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.372 22:07:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.372 22:07:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.372 22:07:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:59.372 22:07:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:59.372 22:07:41 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.372 22:07:41 -- target/fio.sh@16 -- # nvmftestinit 00:15:59.372 22:07:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:59.372 22:07:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.372 22:07:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:59.372 22:07:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:59.372 22:07:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:59.372 22:07:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.372 22:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:59.372 22:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.372 22:07:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:59.372 22:07:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:59.372 22:07:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.372 22:07:41 -- common/autotest_common.sh@10 -- # set +x 00:16:01.289 22:07:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:01.289 22:07:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.289 22:07:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.289 22:07:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.289 22:07:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.289 22:07:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.289 22:07:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.289 22:07:43 -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.289 22:07:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.289 22:07:43 -- nvmf/common.sh@296 -- # e810=() 00:16:01.289 22:07:43 -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.289 22:07:43 -- nvmf/common.sh@297 -- # x722=() 00:16:01.289 22:07:43 -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.289 22:07:43 -- nvmf/common.sh@298 -- # mlx=() 00:16:01.289 22:07:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.289 22:07:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.289 22:07:43 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.289 22:07:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.289 22:07:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.289 22:07:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.289 22:07:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.289 22:07:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:01.289 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:01.289 22:07:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.289 22:07:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:01.289 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:01.289 22:07:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.289 
22:07:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.289 22:07:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.289 22:07:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:01.289 22:07:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.289 22:07:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:01.289 Found net devices under 0000:84:00.0: cvl_0_0 00:16:01.289 22:07:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.289 22:07:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.289 22:07:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.289 22:07:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:01.289 22:07:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.289 22:07:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:01.289 Found net devices under 0000:84:00.1: cvl_0_1 00:16:01.289 22:07:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.289 22:07:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:01.289 22:07:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:01.289 22:07:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:01.289 22:07:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:01.289 22:07:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.289 22:07:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.289 22:07:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.289 22:07:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.289 22:07:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.289 22:07:43 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.289 22:07:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.289 22:07:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.289 22:07:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.289 22:07:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.289 22:07:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.289 22:07:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.289 22:07:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.549 22:07:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.549 22:07:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.549 22:07:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.549 22:07:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.549 22:07:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.549 22:07:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.549 22:07:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:16:01.549 00:16:01.549 --- 10.0.0.2 ping statistics --- 00:16:01.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.549 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:16:01.549 22:07:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:16:01.549 00:16:01.549 --- 10.0.0.1 ping statistics --- 00:16:01.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.549 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:01.549 22:07:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.549 22:07:43 -- nvmf/common.sh@411 -- # return 0 00:16:01.549 22:07:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:01.549 22:07:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.549 22:07:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:01.549 22:07:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:01.549 22:07:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.549 22:07:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:01.549 22:07:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:01.549 22:07:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:01.549 22:07:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:01.549 22:07:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:01.549 22:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.549 22:07:43 -- nvmf/common.sh@470 -- # nvmfpid=3945444 00:16:01.549 22:07:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.549 22:07:43 -- nvmf/common.sh@471 -- # waitforlisten 3945444 00:16:01.549 22:07:43 -- common/autotest_common.sh@817 -- # '[' -z 3945444 ']' 00:16:01.549 22:07:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.549 22:07:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:01.549 22:07:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:01.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.549 22:07:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:01.549 22:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:01.549 [2024-04-24 22:07:43.707543] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:01.549 [2024-04-24 22:07:43.707630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.549 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.549 [2024-04-24 22:07:43.792222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.808 [2024-04-24 22:07:43.916021] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.808 [2024-04-24 22:07:43.916094] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.808 [2024-04-24 22:07:43.916110] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.808 [2024-04-24 22:07:43.916124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.808 [2024-04-24 22:07:43.916137] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:01.808 [2024-04-24 22:07:43.916250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.808 [2024-04-24 22:07:43.916323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.808 [2024-04-24 22:07:43.916376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.808 [2024-04-24 22:07:43.916379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.808 22:07:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.808 22:07:44 -- common/autotest_common.sh@850 -- # return 0 00:16:01.808 22:07:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:01.808 22:07:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:01.808 22:07:44 -- common/autotest_common.sh@10 -- # set +x 00:16:02.067 22:07:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.067 22:07:44 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.325 [2024-04-24 22:07:44.387492] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.325 22:07:44 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.584 22:07:44 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:02.584 22:07:44 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.842 22:07:44 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:02.842 22:07:44 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.101 22:07:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:03.101 22:07:45 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.359 22:07:45 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:16:03.359 22:07:45 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:03.924 22:07:45 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.182 22:07:46 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:04.182 22:07:46 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.440 22:07:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:04.440 22:07:46 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.698 22:07:46 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:04.698 22:07:46 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:04.956 22:07:47 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:05.214 22:07:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.214 22:07:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.472 22:07:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:05.472 22:07:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:05.730 22:07:47 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.296 [2024-04-24 22:07:48.361060] nvmf_rpc.c: 621:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:06.296 [2024-04-24 22:07:48.361435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.296 22:07:48 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:06.554 22:07:48 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:06.811 22:07:49 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:07.376 22:07:49 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:07.377 22:07:49 -- common/autotest_common.sh@1184 -- # local i=0 00:16:07.377 22:07:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.377 22:07:49 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:07.377 22:07:49 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:07.377 22:07:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:09.903 22:07:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:09.903 22:07:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:09.903 22:07:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:09.903 22:07:51 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:09.903 22:07:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.903 22:07:51 -- common/autotest_common.sh@1194 -- # return 0 00:16:09.903 22:07:51 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 
00:16:09.903 [global] 00:16:09.903 thread=1 00:16:09.903 invalidate=1 00:16:09.903 rw=write 00:16:09.903 time_based=1 00:16:09.903 runtime=1 00:16:09.903 ioengine=libaio 00:16:09.903 direct=1 00:16:09.903 bs=4096 00:16:09.903 iodepth=1 00:16:09.903 norandommap=0 00:16:09.903 numjobs=1 00:16:09.903 00:16:09.903 verify_dump=1 00:16:09.903 verify_backlog=512 00:16:09.903 verify_state_save=0 00:16:09.903 do_verify=1 00:16:09.903 verify=crc32c-intel 00:16:09.903 [job0] 00:16:09.903 filename=/dev/nvme0n1 00:16:09.903 [job1] 00:16:09.903 filename=/dev/nvme0n2 00:16:09.903 [job2] 00:16:09.903 filename=/dev/nvme0n3 00:16:09.903 [job3] 00:16:09.903 filename=/dev/nvme0n4 00:16:09.903 Could not set queue depth (nvme0n1) 00:16:09.903 Could not set queue depth (nvme0n2) 00:16:09.903 Could not set queue depth (nvme0n3) 00:16:09.903 Could not set queue depth (nvme0n4) 00:16:09.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.903 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.903 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.903 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.903 fio-3.35 00:16:09.903 Starting 4 threads 00:16:11.276 00:16:11.276 job0: (groupid=0, jobs=1): err= 0: pid=3946642: Wed Apr 24 22:07:53 2024 00:16:11.276 read: IOPS=570, BW=2282KiB/s (2337kB/s)(2348KiB/1029msec) 00:16:11.276 slat (nsec): min=7849, max=33240, avg=10862.53, stdev=2938.34 00:16:11.276 clat (usec): min=282, max=41456, avg=1249.43, stdev=5993.90 00:16:11.276 lat (usec): min=292, max=41473, avg=1260.30, stdev=5994.47 00:16:11.276 clat percentiles (usec): 00:16:11.276 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:16:11.276 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:16:11.276 | 70.00th=[ 
343], 80.00th=[ 404], 90.00th=[ 461], 95.00th=[ 510], 00:16:11.276 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:11.276 | 99.99th=[41681] 00:16:11.276 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:16:11.276 slat (nsec): min=9213, max=30753, avg=13039.61, stdev=1799.38 00:16:11.276 clat (usec): min=200, max=398, avg=263.16, stdev=32.02 00:16:11.276 lat (usec): min=211, max=413, avg=276.20, stdev=32.19 00:16:11.276 clat percentiles (usec): 00:16:11.276 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 241], 00:16:11.276 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:16:11.276 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 322], 00:16:11.276 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 396], 99.95th=[ 400], 00:16:11.276 | 99.99th=[ 400] 00:16:11.276 bw ( KiB/s): min= 8192, max= 8192, per=51.45%, avg=8192.00, stdev= 0.00, samples=1 00:16:11.276 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:11.276 lat (usec) : 250=21.91%, 500=75.98%, 750=1.06%, 1000=0.25% 00:16:11.276 lat (msec) : 50=0.81% 00:16:11.276 cpu : usr=1.26%, sys=1.75%, ctx=1613, majf=0, minf=2 00:16:11.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.276 issued rwts: total=587,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.276 job1: (groupid=0, jobs=1): err= 0: pid=3946645: Wed Apr 24 22:07:53 2024 00:16:11.276 read: IOPS=1392, BW=5570KiB/s (5704kB/s)(5576KiB/1001msec) 00:16:11.276 slat (nsec): min=5914, max=31161, avg=13120.87, stdev=3715.02 00:16:11.276 clat (usec): min=255, max=778, avg=409.58, stdev=85.09 00:16:11.276 lat (usec): min=263, max=793, avg=422.70, stdev=86.73 00:16:11.276 clat percentiles 
(usec): 00:16:11.276 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 334], 00:16:11.276 | 30.00th=[ 359], 40.00th=[ 383], 50.00th=[ 404], 60.00th=[ 424], 00:16:11.276 | 70.00th=[ 453], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 562], 00:16:11.276 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 775], 00:16:11.276 | 99.99th=[ 775] 00:16:11.276 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:11.276 slat (nsec): min=7741, max=37695, avg=12691.43, stdev=4361.15 00:16:11.276 clat (usec): min=167, max=527, avg=248.00, stdev=55.76 00:16:11.276 lat (usec): min=179, max=547, avg=260.69, stdev=57.56 00:16:11.276 clat percentiles (usec): 00:16:11.276 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 196], 00:16:11.276 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 237], 60.00th=[ 253], 00:16:11.276 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 371], 00:16:11.276 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 515], 99.95th=[ 529], 00:16:11.276 | 99.99th=[ 529] 00:16:11.276 bw ( KiB/s): min= 7784, max= 7784, per=48.89%, avg=7784.00, stdev= 0.00, samples=1 00:16:11.276 iops : min= 1946, max= 1946, avg=1946.00, stdev= 0.00, samples=1 00:16:11.276 lat (usec) : 250=30.58%, 500=61.64%, 750=7.75%, 1000=0.03% 00:16:11.276 cpu : usr=2.00%, sys=4.30%, ctx=2931, majf=0, minf=1 00:16:11.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.276 issued rwts: total=1394,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.276 job2: (groupid=0, jobs=1): err= 0: pid=3946646: Wed Apr 24 22:07:53 2024 00:16:11.276 read: IOPS=1011, BW=4048KiB/s (4145kB/s)(4052KiB/1001msec) 00:16:11.276 slat (nsec): min=5407, max=42145, avg=11846.51, stdev=4121.93 00:16:11.276 
clat (usec): min=267, max=41164, avg=659.50, stdev=3113.51 00:16:11.276 lat (usec): min=277, max=41174, avg=671.34, stdev=3113.77 00:16:11.276 clat percentiles (usec): 00:16:11.276 | 1.00th=[ 285], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 371], 00:16:11.276 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 420], 00:16:11.276 | 70.00th=[ 449], 80.00th=[ 482], 90.00th=[ 510], 95.00th=[ 537], 00:16:11.276 | 99.00th=[ 635], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:16:11.276 | 99.99th=[41157] 00:16:11.277 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:11.277 slat (usec): min=9, max=39589, avg=56.09, stdev=1245.43 00:16:11.277 clat (usec): min=179, max=471, avg=249.19, stdev=41.67 00:16:11.277 lat (usec): min=192, max=40061, avg=305.28, stdev=1253.56 00:16:11.277 clat percentiles (usec): 00:16:11.277 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:16:11.277 | 30.00th=[ 219], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 260], 00:16:11.277 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:16:11.277 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 449], 99.95th=[ 474], 00:16:11.277 | 99.99th=[ 474] 00:16:11.277 bw ( KiB/s): min= 5344, max= 5344, per=33.56%, avg=5344.00, stdev= 0.00, samples=1 00:16:11.277 iops : min= 1336, max= 1336, avg=1336.00, stdev= 0.00, samples=1 00:16:11.277 lat (usec) : 250=26.22%, 500=67.11%, 750=6.38% 00:16:11.277 lat (msec) : 50=0.29% 00:16:11.277 cpu : usr=2.00%, sys=2.80%, ctx=2041, majf=0, minf=1 00:16:11.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.277 issued rwts: total=1013,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.277 job3: (groupid=0, jobs=1): err= 0: pid=3946647: Wed 
Apr 24 22:07:53 2024 00:16:11.277 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:16:11.277 slat (nsec): min=10295, max=17117, avg=15609.48, stdev=1295.79 00:16:11.277 clat (usec): min=40946, max=41401, avg=40998.46, stdev=93.47 00:16:11.277 lat (usec): min=40962, max=41411, avg=41014.07, stdev=92.28 00:16:11.277 clat percentiles (usec): 00:16:11.277 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:11.277 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:11.277 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:11.277 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:11.277 | 99.99th=[41157] 00:16:11.277 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:16:11.277 slat (nsec): min=8835, max=54995, avg=13506.48, stdev=4264.80 00:16:11.277 clat (usec): min=207, max=1152, avg=285.61, stdev=58.76 00:16:11.277 lat (usec): min=219, max=1161, avg=299.12, stdev=58.87 00:16:11.277 clat percentiles (usec): 00:16:11.277 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 249], 00:16:11.277 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 293], 00:16:11.277 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 367], 00:16:11.277 | 99.00th=[ 457], 99.50th=[ 515], 99.90th=[ 1156], 99.95th=[ 1156], 00:16:11.277 | 99.99th=[ 1156] 00:16:11.277 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:16:11.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:11.277 lat (usec) : 250=20.45%, 500=74.86%, 750=0.56% 00:16:11.277 lat (msec) : 2=0.19%, 50=3.94% 00:16:11.277 cpu : usr=0.30%, sys=0.89%, ctx=534, majf=0, minf=1 00:16:11.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:11.277 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:11.277 00:16:11.277 Run status group 0 (all jobs): 00:16:11.277 READ: bw=11.4MiB/s (12.0MB/s), 82.6KiB/s-5570KiB/s (84.6kB/s-5704kB/s), io=11.8MiB (12.3MB), run=1001-1029msec 00:16:11.277 WRITE: bw=15.5MiB/s (16.3MB/s), 2014KiB/s-6138KiB/s (2062kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:16:11.277 00:16:11.277 Disk stats (read/write): 00:16:11.277 nvme0n1: ios=546/1024, merge=0/0, ticks=1475/259, in_queue=1734, util=99.70% 00:16:11.277 nvme0n2: ios=1076/1373, merge=0/0, ticks=818/337, in_queue=1155, util=100.00% 00:16:11.277 nvme0n3: ios=798/1024, merge=0/0, ticks=765/241, in_queue=1006, util=96.71% 00:16:11.277 nvme0n4: ios=16/512, merge=0/0, ticks=657/144, in_queue=801, util=89.42% 00:16:11.277 22:07:53 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:11.277 [global] 00:16:11.277 thread=1 00:16:11.277 invalidate=1 00:16:11.277 rw=randwrite 00:16:11.277 time_based=1 00:16:11.277 runtime=1 00:16:11.277 ioengine=libaio 00:16:11.277 direct=1 00:16:11.277 bs=4096 00:16:11.277 iodepth=1 00:16:11.277 norandommap=0 00:16:11.277 numjobs=1 00:16:11.277 00:16:11.277 verify_dump=1 00:16:11.277 verify_backlog=512 00:16:11.277 verify_state_save=0 00:16:11.277 do_verify=1 00:16:11.277 verify=crc32c-intel 00:16:11.277 [job0] 00:16:11.277 filename=/dev/nvme0n1 00:16:11.277 [job1] 00:16:11.277 filename=/dev/nvme0n2 00:16:11.277 [job2] 00:16:11.277 filename=/dev/nvme0n3 00:16:11.277 [job3] 00:16:11.277 filename=/dev/nvme0n4 00:16:11.277 Could not set queue depth (nvme0n1) 00:16:11.277 Could not set queue depth (nvme0n2) 00:16:11.277 Could not set queue depth (nvme0n3) 00:16:11.277 Could not set queue depth (nvme0n4) 00:16:11.277 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:11.277 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.277 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.277 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.277 fio-3.35 00:16:11.277 Starting 4 threads 00:16:12.651 00:16:12.651 job0: (groupid=0, jobs=1): err= 0: pid=3946878: Wed Apr 24 22:07:54 2024 00:16:12.651 read: IOPS=506, BW=2027KiB/s (2076kB/s)(2100KiB/1036msec) 00:16:12.651 slat (nsec): min=6624, max=31330, avg=10780.82, stdev=3825.58 00:16:12.651 clat (usec): min=265, max=41053, avg=1441.90, stdev=6548.65 00:16:12.651 lat (usec): min=273, max=41069, avg=1452.69, stdev=6549.30 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 310], 00:16:12.651 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 367], 00:16:12.651 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 465], 95.00th=[ 519], 00:16:12.651 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:12.651 | 99.99th=[41157] 00:16:12.651 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:16:12.651 slat (nsec): min=8762, max=35232, avg=11381.81, stdev=2140.86 00:16:12.651 clat (usec): min=188, max=436, avg=250.29, stdev=34.37 00:16:12.651 lat (usec): min=199, max=454, avg=261.67, stdev=34.49 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 221], 00:16:12.651 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:16:12.651 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 314], 00:16:12.651 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 404], 99.95th=[ 437], 00:16:12.651 | 99.99th=[ 437] 00:16:12.651 bw ( KiB/s): min= 48, max= 8144, per=23.02%, avg=4096.00, stdev=5724.74, samples=2 00:16:12.651 iops : 
min= 12, max= 2036, avg=1024.00, stdev=1431.18, samples=2 00:16:12.651 lat (usec) : 250=34.22%, 500=63.46%, 750=1.29%, 1000=0.13% 00:16:12.651 lat (msec) : 50=0.90% 00:16:12.651 cpu : usr=1.16%, sys=1.74%, ctx=1550, majf=0, minf=1 00:16:12.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.651 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.651 job1: (groupid=0, jobs=1): err= 0: pid=3946879: Wed Apr 24 22:07:54 2024 00:16:12.651 read: IOPS=673, BW=2695KiB/s (2760kB/s)(2744KiB/1018msec) 00:16:12.651 slat (nsec): min=6783, max=56231, avg=10189.44, stdev=4923.20 00:16:12.651 clat (usec): min=259, max=41032, avg=1096.51, stdev=5538.47 00:16:12.651 lat (usec): min=267, max=41048, avg=1106.70, stdev=5539.01 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:16:12.651 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:16:12.651 | 70.00th=[ 310], 80.00th=[ 375], 90.00th=[ 469], 95.00th=[ 537], 00:16:12.651 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:12.651 | 99.99th=[41157] 00:16:12.651 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:16:12.651 slat (nsec): min=9837, max=30887, avg=12530.78, stdev=2316.13 00:16:12.651 clat (usec): min=176, max=422, avg=234.09, stdev=35.86 00:16:12.651 lat (usec): min=189, max=433, avg=246.62, stdev=36.55 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:16:12.651 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:16:12.651 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 297], 00:16:12.651 | 99.00th=[ 
347], 99.50th=[ 359], 99.90th=[ 383], 99.95th=[ 424], 00:16:12.651 | 99.99th=[ 424] 00:16:12.651 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=2 00:16:12.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:12.651 lat (usec) : 250=43.51%, 500=53.98%, 750=1.70% 00:16:12.651 lat (msec) : 2=0.06%, 50=0.76% 00:16:12.651 cpu : usr=1.77%, sys=2.06%, ctx=1711, majf=0, minf=1 00:16:12.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.651 issued rwts: total=686,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.651 job2: (groupid=0, jobs=1): err= 0: pid=3946880: Wed Apr 24 22:07:54 2024 00:16:12.651 read: IOPS=516, BW=2065KiB/s (2114kB/s)(2100KiB/1017msec) 00:16:12.651 slat (nsec): min=5344, max=33836, avg=13464.95, stdev=4690.86 00:16:12.651 clat (usec): min=280, max=41035, avg=1382.13, stdev=6307.63 00:16:12.651 lat (usec): min=287, max=41069, avg=1395.59, stdev=6308.41 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 330], 00:16:12.651 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:16:12.651 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 465], 95.00th=[ 537], 00:16:12.651 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:12.651 | 99.99th=[41157] 00:16:12.651 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:16:12.651 slat (nsec): min=7309, max=30602, avg=12639.29, stdev=4646.25 00:16:12.651 clat (usec): min=195, max=1175, avg=259.17, stdev=58.80 00:16:12.651 lat (usec): min=203, max=1193, avg=271.81, stdev=59.31 00:16:12.651 clat percentiles (usec): 00:16:12.651 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 210], 
20.00th=[ 217], 00:16:12.651 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 269], 00:16:12.652 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 359], 00:16:12.652 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 545], 99.95th=[ 1172], 00:16:12.652 | 99.99th=[ 1172] 00:16:12.652 bw ( KiB/s): min= 672, max= 7520, per=23.02%, avg=4096.00, stdev=4842.27, samples=2 00:16:12.652 iops : min= 168, max= 1880, avg=1024.00, stdev=1210.57, samples=2 00:16:12.652 lat (usec) : 250=37.31%, 500=60.17%, 750=1.55% 00:16:12.652 lat (msec) : 2=0.13%, 50=0.84% 00:16:12.652 cpu : usr=0.98%, sys=2.07%, ctx=1550, majf=0, minf=2 00:16:12.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.652 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.652 job3: (groupid=0, jobs=1): err= 0: pid=3946881: Wed Apr 24 22:07:54 2024 00:16:12.652 read: IOPS=1369, BW=5479KiB/s (5610kB/s)(5484KiB/1001msec) 00:16:12.652 slat (nsec): min=5554, max=40430, avg=9442.51, stdev=3428.41 00:16:12.652 clat (usec): min=288, max=40981, avg=438.89, stdev=1551.51 00:16:12.652 lat (usec): min=295, max=40996, avg=448.33, stdev=1551.62 00:16:12.652 clat percentiles (usec): 00:16:12.652 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:16:12.652 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 379], 00:16:12.652 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 486], 00:16:12.652 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[40633], 99.95th=[41157], 00:16:12.652 | 99.99th=[41157] 00:16:12.652 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:12.652 slat (nsec): min=7259, max=40428, avg=10713.18, stdev=3148.65 00:16:12.652 clat (usec): min=183, 
max=1268, avg=234.77, stdev=51.13 00:16:12.652 lat (usec): min=192, max=1278, avg=245.48, stdev=51.78 00:16:12.652 clat percentiles (usec): 00:16:12.652 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:16:12.652 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:16:12.652 | 70.00th=[ 241], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 306], 00:16:12.652 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 840], 99.95th=[ 1270], 00:16:12.652 | 99.99th=[ 1270] 00:16:12.652 bw ( KiB/s): min= 6080, max= 6080, per=34.17%, avg=6080.00, stdev= 0.00, samples=1 00:16:12.652 iops : min= 1520, max= 1520, avg=1520.00, stdev= 0.00, samples=1 00:16:12.652 lat (usec) : 250=40.11%, 500=58.34%, 750=1.31%, 1000=0.07% 00:16:12.652 lat (msec) : 2=0.07%, 4=0.03%, 50=0.07% 00:16:12.652 cpu : usr=1.90%, sys=3.60%, ctx=2908, majf=0, minf=1 00:16:12.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.652 issued rwts: total=1371,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.652 00:16:12.652 Run status group 0 (all jobs): 00:16:12.652 READ: bw=11.7MiB/s (12.3MB/s), 2027KiB/s-5479KiB/s (2076kB/s-5610kB/s), io=12.1MiB (12.7MB), run=1001-1036msec 00:16:12.652 WRITE: bw=17.4MiB/s (18.2MB/s), 3954KiB/s-6138KiB/s (4049kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:16:12.652 00:16:12.652 Disk stats (read/write): 00:16:12.652 nvme0n1: ios=546/1024, merge=0/0, ticks=1536/259, in_queue=1795, util=96.39% 00:16:12.652 nvme0n2: ios=718/1024, merge=0/0, ticks=964/236, in_queue=1200, util=97.76% 00:16:12.652 nvme0n3: ios=577/1024, merge=0/0, ticks=1094/260, in_queue=1354, util=96.40% 00:16:12.652 nvme0n4: ios=1068/1381, merge=0/0, ticks=1436/316, in_queue=1752, util=99.25% 00:16:12.652 22:07:54 -- 
target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:12.652 [global] 00:16:12.652 thread=1 00:16:12.652 invalidate=1 00:16:12.652 rw=write 00:16:12.652 time_based=1 00:16:12.652 runtime=1 00:16:12.652 ioengine=libaio 00:16:12.652 direct=1 00:16:12.652 bs=4096 00:16:12.652 iodepth=128 00:16:12.652 norandommap=0 00:16:12.652 numjobs=1 00:16:12.652 00:16:12.652 verify_dump=1 00:16:12.652 verify_backlog=512 00:16:12.652 verify_state_save=0 00:16:12.652 do_verify=1 00:16:12.652 verify=crc32c-intel 00:16:12.652 [job0] 00:16:12.652 filename=/dev/nvme0n1 00:16:12.652 [job1] 00:16:12.652 filename=/dev/nvme0n2 00:16:12.652 [job2] 00:16:12.652 filename=/dev/nvme0n3 00:16:12.652 [job3] 00:16:12.652 filename=/dev/nvme0n4 00:16:12.652 Could not set queue depth (nvme0n1) 00:16:12.652 Could not set queue depth (nvme0n2) 00:16:12.652 Could not set queue depth (nvme0n3) 00:16:12.652 Could not set queue depth (nvme0n4) 00:16:12.912 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.912 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.912 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.912 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.912 fio-3.35 00:16:12.912 Starting 4 threads 00:16:14.285 00:16:14.285 job0: (groupid=0, jobs=1): err= 0: pid=3947107: Wed Apr 24 22:07:56 2024 00:16:14.285 read: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1007msec) 00:16:14.285 slat (usec): min=2, max=21177, avg=190.91, stdev=1325.51 00:16:14.285 clat (usec): min=1845, max=84890, avg=25378.02, stdev=9935.32 00:16:14.285 lat (usec): min=9031, max=90450, avg=25568.93, stdev=10001.31 00:16:14.285 clat percentiles (usec): 00:16:14.285 | 1.00th=[10552], 
5.00th=[12256], 10.00th=[15795], 20.00th=[18482], 00:16:14.285 | 30.00th=[20317], 40.00th=[21365], 50.00th=[22938], 60.00th=[25297], 00:16:14.285 | 70.00th=[27132], 80.00th=[31851], 90.00th=[38536], 95.00th=[45351], 00:16:14.285 | 99.00th=[54264], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:16:14.285 | 99.99th=[84411] 00:16:14.285 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:16:14.285 slat (usec): min=4, max=18688, avg=152.53, stdev=1058.74 00:16:14.285 clat (usec): min=6343, max=40890, avg=18178.83, stdev=6716.92 00:16:14.285 lat (usec): min=6350, max=40896, avg=18331.36, stdev=6796.61 00:16:14.285 clat percentiles (usec): 00:16:14.285 | 1.00th=[ 6521], 5.00th=[10159], 10.00th=[11600], 20.00th=[12518], 00:16:14.285 | 30.00th=[13042], 40.00th=[14353], 50.00th=[17433], 60.00th=[18220], 00:16:14.285 | 70.00th=[20841], 80.00th=[24249], 90.00th=[27657], 95.00th=[30802], 00:16:14.285 | 99.00th=[38536], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:16:14.285 | 99.99th=[40633] 00:16:14.285 bw ( KiB/s): min=12288, max=12288, per=25.60%, avg=12288.00, stdev= 0.00, samples=2 00:16:14.285 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:14.285 lat (msec) : 2=0.02%, 10=2.78%, 20=46.50%, 50=49.80%, 100=0.90% 00:16:14.285 cpu : usr=1.89%, sys=2.78%, ctx=223, majf=0, minf=1 00:16:14.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:14.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.285 issued rwts: total=2719,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.285 job1: (groupid=0, jobs=1): err= 0: pid=3947108: Wed Apr 24 22:07:56 2024 00:16:14.285 read: IOPS=2340, BW=9363KiB/s (9587kB/s)(9812KiB/1048msec) 00:16:14.285 slat (usec): min=2, max=19981, avg=164.56, stdev=1289.99 
00:16:14.285 clat (usec): min=9178, max=78533, avg=25372.36, stdev=12492.40 00:16:14.285 lat (usec): min=9211, max=78537, avg=25536.92, stdev=12573.71 00:16:14.285 clat percentiles (usec): 00:16:14.285 | 1.00th=[10028], 5.00th=[10945], 10.00th=[12780], 20.00th=[13698], 00:16:14.285 | 30.00th=[17433], 40.00th=[20841], 50.00th=[23200], 60.00th=[26346], 00:16:14.285 | 70.00th=[29230], 80.00th=[32375], 90.00th=[36963], 95.00th=[51119], 00:16:14.285 | 99.00th=[73925], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:16:14.285 | 99.99th=[78119] 00:16:14.285 write: IOPS=2442, BW=9771KiB/s (10.0MB/s)(10.0MiB/1048msec); 0 zone resets 00:16:14.285 slat (usec): min=3, max=14931, avg=191.64, stdev=951.82 00:16:14.285 clat (usec): min=4295, max=84386, avg=27585.88, stdev=21512.80 00:16:14.285 lat (usec): min=4301, max=84394, avg=27777.52, stdev=21659.16 00:16:14.285 clat percentiles (usec): 00:16:14.285 | 1.00th=[ 6063], 5.00th=[ 7767], 10.00th=[ 8848], 20.00th=[10552], 00:16:14.285 | 30.00th=[12518], 40.00th=[13566], 50.00th=[16188], 60.00th=[22414], 00:16:14.285 | 70.00th=[34866], 80.00th=[49546], 90.00th=[64226], 95.00th=[73925], 00:16:14.285 | 99.00th=[83362], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:16:14.285 | 99.99th=[84411] 00:16:14.285 bw ( KiB/s): min= 8192, max=12288, per=21.33%, avg=10240.00, stdev=2896.31, samples=2 00:16:14.285 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:16:14.285 lat (msec) : 10=9.93%, 20=36.66%, 50=39.72%, 100=13.68% 00:16:14.285 cpu : usr=1.91%, sys=2.20%, ctx=267, majf=0, minf=1 00:16:14.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:14.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.286 issued rwts: total=2453,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.286 job2: 
(groupid=0, jobs=1): err= 0: pid=3947109: Wed Apr 24 22:07:56 2024 00:16:14.286 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:16:14.286 slat (usec): min=2, max=22263, avg=118.83, stdev=804.70 00:16:14.286 clat (usec): min=5021, max=41284, avg=14945.72, stdev=5426.92 00:16:14.286 lat (usec): min=5027, max=41290, avg=15064.55, stdev=5457.32 00:16:14.286 clat percentiles (usec): 00:16:14.286 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[10421], 20.00th=[11731], 00:16:14.286 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13698], 60.00th=[14484], 00:16:14.286 | 70.00th=[16188], 80.00th=[17171], 90.00th=[19530], 95.00th=[27657], 00:16:14.286 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:14.286 | 99.99th=[41157] 00:16:14.286 write: IOPS=4380, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1001msec); 0 zone resets 00:16:14.286 slat (usec): min=4, max=15963, avg=110.58, stdev=787.09 00:16:14.286 clat (usec): min=447, max=37470, avg=14864.14, stdev=5390.24 00:16:14.286 lat (usec): min=4934, max=37485, avg=14974.72, stdev=5411.04 00:16:14.286 clat percentiles (usec): 00:16:14.286 | 1.00th=[ 5473], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10814], 00:16:14.286 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13829], 60.00th=[14615], 00:16:14.286 | 70.00th=[15401], 80.00th=[16712], 90.00th=[22152], 95.00th=[28181], 00:16:14.286 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:16:14.286 | 99.99th=[37487] 00:16:14.286 bw ( KiB/s): min=16440, max=16440, per=34.25%, avg=16440.00, stdev= 0.00, samples=1 00:16:14.286 iops : min= 4110, max= 4110, avg=4110.00, stdev= 0.00, samples=1 00:16:14.286 lat (usec) : 500=0.01% 00:16:14.286 lat (msec) : 10=10.84%, 20=77.99%, 50=11.17% 00:16:14.286 cpu : usr=2.20%, sys=4.70%, ctx=287, majf=0, minf=1 00:16:14.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:14.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.286 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.286 issued rwts: total=4096,4385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.286 job3: (groupid=0, jobs=1): err= 0: pid=3947110: Wed Apr 24 22:07:56 2024 00:16:14.286 read: IOPS=2535, BW=9.90MiB/s (10.4MB/s)(9.97MiB/1007msec) 00:16:14.286 slat (usec): min=3, max=22513, avg=210.83, stdev=1331.35 00:16:14.286 clat (usec): min=3107, max=57969, avg=26008.77, stdev=11673.82 00:16:14.286 lat (usec): min=10260, max=67371, avg=26219.61, stdev=11759.76 00:16:14.286 clat percentiles (usec): 00:16:14.286 | 1.00th=[11076], 5.00th=[12780], 10.00th=[14353], 20.00th=[15795], 00:16:14.286 | 30.00th=[17171], 40.00th=[19792], 50.00th=[22938], 60.00th=[25822], 00:16:14.286 | 70.00th=[30540], 80.00th=[36439], 90.00th=[45351], 95.00th=[49546], 00:16:14.286 | 99.00th=[56361], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:16:14.286 | 99.99th=[57934] 00:16:14.286 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:14.286 slat (usec): min=5, max=14001, avg=173.47, stdev=1061.87 00:16:14.286 clat (usec): min=7647, max=67330, avg=23638.63, stdev=9984.80 00:16:14.286 lat (usec): min=7655, max=67341, avg=23812.10, stdev=10035.58 00:16:14.286 clat percentiles (usec): 00:16:14.286 | 1.00th=[11076], 5.00th=[12125], 10.00th=[13435], 20.00th=[14746], 00:16:14.286 | 30.00th=[16188], 40.00th=[18744], 50.00th=[21103], 60.00th=[22938], 00:16:14.286 | 70.00th=[27919], 80.00th=[34866], 90.00th=[37487], 95.00th=[43254], 00:16:14.286 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:16:14.286 | 99.99th=[67634] 00:16:14.286 bw ( KiB/s): min= 9064, max=11416, per=21.33%, avg=10240.00, stdev=1663.12, samples=2 00:16:14.286 iops : min= 2266, max= 2854, avg=2560.00, stdev=415.78, samples=2 00:16:14.286 lat (msec) : 4=0.02%, 10=0.23%, 20=41.60%, 50=54.88%, 100=3.27% 00:16:14.286 cpu : usr=2.49%, sys=3.88%, ctx=217, 
majf=0, minf=1 00:16:14.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:14.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.286 issued rwts: total=2553,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.286 00:16:14.286 Run status group 0 (all jobs): 00:16:14.286 READ: bw=44.1MiB/s (46.2MB/s), 9363KiB/s-16.0MiB/s (9587kB/s-16.8MB/s), io=46.2MiB (48.4MB), run=1001-1048msec 00:16:14.286 WRITE: bw=46.9MiB/s (49.2MB/s), 9771KiB/s-17.1MiB/s (10.0MB/s-17.9MB/s), io=49.1MiB (51.5MB), run=1001-1048msec 00:16:14.286 00:16:14.286 Disk stats (read/write): 00:16:14.286 nvme0n1: ios=2181/2560, merge=0/0, ticks=25490/18334, in_queue=43824, util=96.59% 00:16:14.286 nvme0n2: ios=1892/2048, merge=0/0, ticks=26736/35057, in_queue=61793, util=85.86% 00:16:14.286 nvme0n3: ios=3474/3584, merge=0/0, ticks=25438/20896, in_queue=46334, util=97.16% 00:16:14.286 nvme0n4: ios=2105/2463, merge=0/0, ticks=20895/20449, in_queue=41344, util=97.03% 00:16:14.286 22:07:56 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:14.286 [global] 00:16:14.286 thread=1 00:16:14.286 invalidate=1 00:16:14.286 rw=randwrite 00:16:14.286 time_based=1 00:16:14.286 runtime=1 00:16:14.286 ioengine=libaio 00:16:14.286 direct=1 00:16:14.286 bs=4096 00:16:14.286 iodepth=128 00:16:14.286 norandommap=0 00:16:14.286 numjobs=1 00:16:14.286 00:16:14.286 verify_dump=1 00:16:14.286 verify_backlog=512 00:16:14.286 verify_state_save=0 00:16:14.286 do_verify=1 00:16:14.286 verify=crc32c-intel 00:16:14.286 [job0] 00:16:14.286 filename=/dev/nvme0n1 00:16:14.286 [job1] 00:16:14.286 filename=/dev/nvme0n2 00:16:14.286 [job2] 00:16:14.286 filename=/dev/nvme0n3 00:16:14.286 [job3] 00:16:14.286 filename=/dev/nvme0n4 
00:16:14.286 Could not set queue depth (nvme0n1) 00:16:14.286 Could not set queue depth (nvme0n2) 00:16:14.286 Could not set queue depth (nvme0n3) 00:16:14.286 Could not set queue depth (nvme0n4) 00:16:14.286 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.286 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.286 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.286 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:14.286 fio-3.35 00:16:14.286 Starting 4 threads 00:16:15.663 00:16:15.663 job0: (groupid=0, jobs=1): err= 0: pid=3947458: Wed Apr 24 22:07:57 2024 00:16:15.663 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:16:15.663 slat (usec): min=3, max=17343, avg=149.09, stdev=1060.17 00:16:15.663 clat (usec): min=4750, max=63396, avg=17642.48, stdev=7214.69 00:16:15.663 lat (usec): min=4757, max=63402, avg=17791.57, stdev=7307.38 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 7242], 5.00th=[11469], 10.00th=[11863], 20.00th=[11994], 00:16:15.663 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15795], 60.00th=[16909], 00:16:15.663 | 70.00th=[18744], 80.00th=[21627], 90.00th=[26084], 95.00th=[30802], 00:16:15.663 | 99.00th=[47973], 99.50th=[55313], 99.90th=[63177], 99.95th=[63177], 00:16:15.663 | 99.99th=[63177] 00:16:15.663 write: IOPS=3258, BW=12.7MiB/s (13.3MB/s)(12.9MiB/1011msec); 0 zone resets 00:16:15.663 slat (usec): min=4, max=12521, avg=156.47, stdev=878.56 00:16:15.663 clat (usec): min=1722, max=74905, avg=22197.73, stdev=16473.38 00:16:15.663 lat (usec): min=1732, max=74915, avg=22354.20, stdev=16579.48 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 5014], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[11076], 00:16:15.663 | 30.00th=[12518], 
40.00th=[14222], 50.00th=[15533], 60.00th=[17695], 00:16:15.663 | 70.00th=[21627], 80.00th=[32637], 90.00th=[47973], 95.00th=[63177], 00:16:15.663 | 99.00th=[69731], 99.50th=[70779], 99.90th=[74974], 99.95th=[74974], 00:16:15.663 | 99.99th=[74974] 00:16:15.663 bw ( KiB/s): min=11688, max=13648, per=21.86%, avg=12668.00, stdev=1385.93, samples=2 00:16:15.663 iops : min= 2922, max= 3412, avg=3167.00, stdev=346.48, samples=2 00:16:15.663 lat (msec) : 2=0.11%, 4=0.22%, 10=7.85%, 20=61.04%, 50=25.78% 00:16:15.663 lat (msec) : 100=5.00% 00:16:15.663 cpu : usr=2.77%, sys=4.85%, ctx=308, majf=0, minf=1 00:16:15.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:15.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.663 issued rwts: total=3072,3294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.663 job1: (groupid=0, jobs=1): err= 0: pid=3947459: Wed Apr 24 22:07:57 2024 00:16:15.663 read: IOPS=3257, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1003msec) 00:16:15.663 slat (usec): min=2, max=35780, avg=169.79, stdev=1421.27 00:16:15.663 clat (usec): min=1929, max=82194, avg=20975.90, stdev=12072.29 00:16:15.663 lat (usec): min=1935, max=82222, avg=21145.69, stdev=12173.67 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 6259], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[13042], 00:16:15.663 | 30.00th=[13566], 40.00th=[14091], 50.00th=[15008], 60.00th=[17957], 00:16:15.663 | 70.00th=[24249], 80.00th=[28967], 90.00th=[39584], 95.00th=[46400], 00:16:15.663 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:16:15.663 | 99.99th=[82314] 00:16:15.663 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:16:15.663 slat (usec): min=3, max=21325, avg=117.12, stdev=983.01 00:16:15.663 clat (usec): min=6893, max=41351, 
avg=16302.11, stdev=6466.46 00:16:15.663 lat (usec): min=6901, max=41573, avg=16419.22, stdev=6529.98 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[12256], 00:16:15.663 | 30.00th=[12518], 40.00th=[13042], 50.00th=[14091], 60.00th=[15401], 00:16:15.663 | 70.00th=[16909], 80.00th=[21103], 90.00th=[25035], 95.00th=[28705], 00:16:15.663 | 99.00th=[36963], 99.50th=[38536], 99.90th=[38536], 99.95th=[40109], 00:16:15.663 | 99.99th=[41157] 00:16:15.663 bw ( KiB/s): min=12800, max=15872, per=24.74%, avg=14336.00, stdev=2172.23, samples=2 00:16:15.663 iops : min= 3200, max= 3968, avg=3584.00, stdev=543.06, samples=2 00:16:15.663 lat (msec) : 2=0.10%, 4=0.15%, 10=7.60%, 20=61.30%, 50=29.65% 00:16:15.663 lat (msec) : 100=1.20% 00:16:15.663 cpu : usr=2.30%, sys=5.29%, ctx=292, majf=0, minf=1 00:16:15.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:15.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.663 issued rwts: total=3267,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.663 job2: (groupid=0, jobs=1): err= 0: pid=3947462: Wed Apr 24 22:07:57 2024 00:16:15.663 read: IOPS=3257, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1005msec) 00:16:15.663 slat (usec): min=2, max=55987, avg=162.41, stdev=1527.05 00:16:15.663 clat (usec): min=1060, max=79620, avg=19928.90, stdev=12189.88 00:16:15.663 lat (usec): min=5246, max=79625, avg=20091.31, stdev=12275.02 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 5473], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[12256], 00:16:15.663 | 30.00th=[13698], 40.00th=[14746], 50.00th=[15664], 60.00th=[16909], 00:16:15.663 | 70.00th=[19268], 80.00th=[27657], 90.00th=[34341], 95.00th=[43779], 00:16:15.663 | 99.00th=[74974], 99.50th=[79168], 99.90th=[79168], 
99.95th=[79168], 00:16:15.663 | 99.99th=[79168] 00:16:15.663 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:16:15.663 slat (usec): min=3, max=17891, avg=126.29, stdev=902.68 00:16:15.663 clat (usec): min=7154, max=79478, avg=17267.27, stdev=9724.76 00:16:15.663 lat (usec): min=7159, max=79482, avg=17393.57, stdev=9744.65 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[12387], 00:16:15.663 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14091], 60.00th=[15795], 00:16:15.663 | 70.00th=[16319], 80.00th=[19792], 90.00th=[27395], 95.00th=[32637], 00:16:15.663 | 99.00th=[66323], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:16:15.663 | 99.99th=[79168] 00:16:15.663 bw ( KiB/s): min=12720, max=15952, per=24.74%, avg=14336.00, stdev=2285.37, samples=2 00:16:15.663 iops : min= 3180, max= 3988, avg=3584.00, stdev=571.34, samples=2 00:16:15.663 lat (msec) : 2=0.01%, 10=7.66%, 20=69.80%, 50=20.53%, 100=2.00% 00:16:15.663 cpu : usr=2.09%, sys=3.78%, ctx=289, majf=0, minf=1 00:16:15.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:15.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.663 issued rwts: total=3274,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.663 job3: (groupid=0, jobs=1): err= 0: pid=3947463: Wed Apr 24 22:07:57 2024 00:16:15.663 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:16:15.663 slat (usec): min=2, max=17010, avg=95.18, stdev=864.37 00:16:15.663 clat (usec): min=1871, max=44408, avg=16089.41, stdev=5709.25 00:16:15.663 lat (usec): min=1879, max=51631, avg=16184.60, stdev=5769.62 00:16:15.663 clat percentiles (usec): 00:16:15.663 | 1.00th=[ 6194], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[12256], 00:16:15.663 | 
30.00th=[13173], 40.00th=[14484], 50.00th=[15533], 60.00th=[16712], 00:16:15.663 | 70.00th=[17957], 80.00th=[19530], 90.00th=[22152], 95.00th=[25560], 00:16:15.664 | 99.00th=[39584], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:16:15.664 | 99.99th=[44303] 00:16:15.664 write: IOPS=4153, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1007msec); 0 zone resets 00:16:15.664 slat (usec): min=3, max=20440, avg=94.03, stdev=823.13 00:16:15.664 clat (usec): min=602, max=48335, avg=14837.55, stdev=7036.84 00:16:15.664 lat (usec): min=623, max=48340, avg=14931.58, stdev=7060.83 00:16:15.664 clat percentiles (usec): 00:16:15.664 | 1.00th=[ 2769], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 9503], 00:16:15.664 | 30.00th=[11731], 40.00th=[12911], 50.00th=[13698], 60.00th=[14877], 00:16:15.664 | 70.00th=[16712], 80.00th=[18744], 90.00th=[21627], 95.00th=[29230], 00:16:15.664 | 99.00th=[40633], 99.50th=[46400], 99.90th=[47973], 99.95th=[48497], 00:16:15.664 | 99.99th=[48497] 00:16:15.664 bw ( KiB/s): min=16384, max=16384, per=28.28%, avg=16384.00, stdev= 0.00, samples=2 00:16:15.664 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:15.664 lat (usec) : 750=0.04%, 1000=0.04% 00:16:15.664 lat (msec) : 2=0.13%, 4=0.88%, 10=15.57%, 20=66.63%, 50=16.72% 00:16:15.664 cpu : usr=2.98%, sys=4.08%, ctx=292, majf=0, minf=1 00:16:15.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:15.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.664 issued rwts: total=4096,4183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.664 00:16:15.664 Run status group 0 (all jobs): 00:16:15.664 READ: bw=53.0MiB/s (55.5MB/s), 11.9MiB/s-15.9MiB/s (12.4MB/s-16.7MB/s), io=53.6MiB (56.2MB), run=1003-1011msec 00:16:15.664 WRITE: bw=56.6MiB/s (59.3MB/s), 12.7MiB/s-16.2MiB/s 
(13.3MB/s-17.0MB/s), io=57.2MiB (60.0MB), run=1003-1011msec 00:16:15.664 00:16:15.664 Disk stats (read/write): 00:16:15.664 nvme0n1: ios=2066/2560, merge=0/0, ticks=37349/61160, in_queue=98509, util=96.09% 00:16:15.664 nvme0n2: ios=2578/2882, merge=0/0, ticks=26354/23725, in_queue=50079, util=89.37% 00:16:15.664 nvme0n3: ios=3104/3138, merge=0/0, ticks=24414/17963, in_queue=42377, util=94.19% 00:16:15.664 nvme0n4: ios=3198/3584, merge=0/0, ticks=44826/42135, in_queue=86961, util=89.00% 00:16:15.664 22:07:57 -- target/fio.sh@55 -- # sync 00:16:15.664 22:07:57 -- target/fio.sh@59 -- # fio_pid=3947601 00:16:15.664 22:07:57 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:15.664 22:07:57 -- target/fio.sh@61 -- # sleep 3 00:16:15.664 [global] 00:16:15.664 thread=1 00:16:15.664 invalidate=1 00:16:15.664 rw=read 00:16:15.664 time_based=1 00:16:15.664 runtime=10 00:16:15.664 ioengine=libaio 00:16:15.664 direct=1 00:16:15.664 bs=4096 00:16:15.664 iodepth=1 00:16:15.664 norandommap=1 00:16:15.664 numjobs=1 00:16:15.664 00:16:15.664 [job0] 00:16:15.664 filename=/dev/nvme0n1 00:16:15.664 [job1] 00:16:15.664 filename=/dev/nvme0n2 00:16:15.664 [job2] 00:16:15.664 filename=/dev/nvme0n3 00:16:15.664 [job3] 00:16:15.664 filename=/dev/nvme0n4 00:16:15.664 Could not set queue depth (nvme0n1) 00:16:15.664 Could not set queue depth (nvme0n2) 00:16:15.664 Could not set queue depth (nvme0n3) 00:16:15.664 Could not set queue depth (nvme0n4) 00:16:15.664 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.664 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.664 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.664 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:16:15.664 fio-3.35 00:16:15.664 Starting 4 threads 00:16:18.979 22:08:00 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:18.979 22:08:00 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:18.979 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=294912, buflen=4096 00:16:18.979 fio: pid=3947695, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:19.237 22:08:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.237 22:08:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:19.237 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=356352, buflen=4096 00:16:19.237 fio: pid=3947694, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:19.495 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=598016, buflen=4096 00:16:19.495 fio: pid=3947692, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:19.495 22:08:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.495 22:08:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:20.061 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=12734464, buflen=4096 00:16:20.061 fio: pid=3947693, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:20.061 22:08:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.061 22:08:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:20.061 00:16:20.061 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=3947692: Wed Apr 24 22:08:02 2024 00:16:20.061 read: IOPS=40, BW=162KiB/s (166kB/s)(584KiB/3610msec) 00:16:20.061 slat (nsec): min=7471, max=26939, avg=12274.16, stdev=3647.32 00:16:20.061 clat (usec): min=363, max=42009, avg=24705.38, stdev=19894.57 00:16:20.061 lat (usec): min=372, max=42023, avg=24717.62, stdev=19896.89 00:16:20.061 clat percentiles (usec): 00:16:20.061 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 416], 00:16:20.061 | 30.00th=[ 441], 40.00th=[14222], 50.00th=[41157], 60.00th=[41157], 00:16:20.061 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:20.061 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:20.061 | 99.99th=[42206] 00:16:20.061 bw ( KiB/s): min= 95, max= 336, per=4.74%, avg=163.29, stdev=97.10, samples=7 00:16:20.061 iops : min= 23, max= 84, avg=40.71, stdev=24.36, samples=7 00:16:20.061 lat (usec) : 500=38.10%, 750=0.68% 00:16:20.061 lat (msec) : 2=0.68%, 20=0.68%, 50=59.18% 00:16:20.061 cpu : usr=0.00%, sys=0.11%, ctx=147, majf=0, minf=1 00:16:20.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.061 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.061 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.061 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3947693: Wed Apr 24 22:08:02 2024 00:16:20.062 read: IOPS=783, BW=3135KiB/s (3210kB/s)(12.1MiB/3967msec) 00:16:20.062 slat (usec): min=4, max=1821, avg=13.07, stdev=32.83 00:16:20.062 clat (usec): min=249, max=41131, avg=1259.56, stdev=6073.59 00:16:20.062 lat (usec): min=254, max=42916, avg=1272.62, stdev=6078.06 00:16:20.062 clat percentiles (usec): 00:16:20.062 | 1.00th=[ 265], 5.00th=[ 277], 
10.00th=[ 285], 20.00th=[ 297], 00:16:20.062 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:16:20.062 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 396], 95.00th=[ 445], 00:16:20.062 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:20.062 | 99.99th=[41157] 00:16:20.062 bw ( KiB/s): min= 95, max=11816, per=100.00%, avg=3539.29, stdev=5506.19, samples=7 00:16:20.062 iops : min= 23, max= 2954, avg=884.71, stdev=1376.62, samples=7 00:16:20.062 lat (usec) : 250=0.03%, 500=96.88%, 750=0.61% 00:16:20.062 lat (msec) : 2=0.16%, 50=2.28% 00:16:20.062 cpu : usr=0.35%, sys=1.21%, ctx=3112, majf=0, minf=1 00:16:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.062 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.062 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.062 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3947694: Wed Apr 24 22:08:02 2024 00:16:20.062 read: IOPS=26, BW=104KiB/s (106kB/s)(348KiB/3351msec) 00:16:20.062 slat (usec): min=10, max=16830, avg=206.13, stdev=1792.49 00:16:20.062 clat (usec): min=538, max=42007, avg=38291.84, stdev=10302.90 00:16:20.062 lat (usec): min=552, max=42021, avg=38306.89, stdev=10302.54 00:16:20.062 clat percentiles (usec): 00:16:20.062 | 1.00th=[ 537], 5.00th=[ 693], 10.00th=[40633], 20.00th=[41157], 00:16:20.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:20.062 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:16:20.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:20.062 | 99.99th=[42206] 00:16:20.062 bw ( KiB/s): min= 96, max= 128, per=3.02%, avg=104.00, stdev=12.39, samples=6 00:16:20.062 iops : min= 
24, max= 32, avg=26.00, stdev= 3.10, samples=6 00:16:20.062 lat (usec) : 750=5.68%, 1000=1.14% 00:16:20.062 lat (msec) : 50=92.05% 00:16:20.062 cpu : usr=0.09%, sys=0.00%, ctx=89, majf=0, minf=1 00:16:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.062 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.062 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.062 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3947695: Wed Apr 24 22:08:02 2024 00:16:20.062 read: IOPS=25, BW=100KiB/s (103kB/s)(288KiB/2872msec) 00:16:20.062 slat (nsec): min=14474, max=34729, avg=17197.19, stdev=2700.40 00:16:20.062 clat (usec): min=413, max=41286, avg=39853.13, stdev=6705.25 00:16:20.062 lat (usec): min=438, max=41302, avg=39870.28, stdev=6704.17 00:16:20.062 clat percentiles (usec): 00:16:20.062 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:20.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:20.062 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:20.062 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:20.062 | 99.99th=[41157] 00:16:20.062 bw ( KiB/s): min= 96, max= 104, per=2.90%, avg=100.80, stdev= 4.38, samples=5 00:16:20.062 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:16:20.062 lat (usec) : 500=1.37%, 750=1.37% 00:16:20.062 lat (msec) : 50=95.89% 00:16:20.062 cpu : usr=0.07%, sys=0.00%, ctx=74, majf=0, minf=1 00:16:20.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.062 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:20.062 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.062 00:16:20.062 Run status group 0 (all jobs): 00:16:20.062 READ: bw=3442KiB/s (3525kB/s), 100KiB/s-3135KiB/s (103kB/s-3210kB/s), io=13.3MiB (14.0MB), run=2872-3967msec 00:16:20.062 00:16:20.062 Disk stats (read/write): 00:16:20.062 nvme0n1: ios=165/0, merge=0/0, ticks=3668/0, in_queue=3668, util=97.44% 00:16:20.062 nvme0n2: ios=3105/0, merge=0/0, ticks=3738/0, in_queue=3738, util=96.53% 00:16:20.062 nvme0n3: ios=87/0, merge=0/0, ticks=3333/0, in_queue=3333, util=96.75% 00:16:20.062 nvme0n4: ios=116/0, merge=0/0, ticks=3127/0, in_queue=3127, util=99.49% 00:16:20.320 22:08:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.320 22:08:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:20.577 22:08:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.577 22:08:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:21.143 22:08:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.143 22:08:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:21.402 22:08:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.402 22:08:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:21.660 22:08:03 -- target/fio.sh@69 -- # fio_status=0 00:16:21.660 22:08:03 -- target/fio.sh@70 -- # wait 3947601 00:16:21.660 22:08:03 -- target/fio.sh@70 -- # fio_status=4 00:16:21.660 22:08:03 -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:16:21.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.660 22:08:03 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.660 22:08:03 -- common/autotest_common.sh@1205 -- # local i=0 00:16:21.660 22:08:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:21.660 22:08:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.660 22:08:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:21.660 22:08:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.660 22:08:03 -- common/autotest_common.sh@1217 -- # return 0 00:16:21.660 22:08:03 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:21.660 22:08:03 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:21.660 nvmf hotplug test: fio failed as expected 00:16:21.660 22:08:03 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.225 22:08:04 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:22.225 22:08:04 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:22.225 22:08:04 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:22.225 22:08:04 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:22.225 22:08:04 -- target/fio.sh@91 -- # nvmftestfini 00:16:22.225 22:08:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:22.225 22:08:04 -- nvmf/common.sh@117 -- # sync 00:16:22.225 22:08:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.225 22:08:04 -- nvmf/common.sh@120 -- # set +e 00:16:22.225 22:08:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.225 22:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.225 rmmod nvme_tcp 00:16:22.225 rmmod nvme_fabrics 00:16:22.225 rmmod nvme_keyring 00:16:22.225 22:08:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.225 
22:08:04 -- nvmf/common.sh@124 -- # set -e 00:16:22.225 22:08:04 -- nvmf/common.sh@125 -- # return 0 00:16:22.225 22:08:04 -- nvmf/common.sh@478 -- # '[' -n 3945444 ']' 00:16:22.225 22:08:04 -- nvmf/common.sh@479 -- # killprocess 3945444 00:16:22.225 22:08:04 -- common/autotest_common.sh@936 -- # '[' -z 3945444 ']' 00:16:22.225 22:08:04 -- common/autotest_common.sh@940 -- # kill -0 3945444 00:16:22.225 22:08:04 -- common/autotest_common.sh@941 -- # uname 00:16:22.225 22:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.225 22:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3945444 00:16:22.225 22:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.225 22:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.225 22:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3945444' 00:16:22.225 killing process with pid 3945444 00:16:22.225 22:08:04 -- common/autotest_common.sh@955 -- # kill 3945444 00:16:22.225 [2024-04-24 22:08:04.425804] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:22.225 22:08:04 -- common/autotest_common.sh@960 -- # wait 3945444 00:16:22.483 22:08:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:22.483 22:08:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:22.483 22:08:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:22.483 22:08:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.483 22:08:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.483 22:08:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.483 22:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.483 22:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.016 22:08:06 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:16:25.016 00:16:25.016 real 0m25.574s 00:16:25.016 user 1m31.693s 00:16:25.016 sys 0m6.271s 00:16:25.016 22:08:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.016 22:08:06 -- common/autotest_common.sh@10 -- # set +x 00:16:25.016 ************************************ 00:16:25.016 END TEST nvmf_fio_target 00:16:25.016 ************************************ 00:16:25.016 22:08:06 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:25.016 22:08:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:25.016 22:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.016 22:08:06 -- common/autotest_common.sh@10 -- # set +x 00:16:25.016 ************************************ 00:16:25.016 START TEST nvmf_bdevio 00:16:25.016 ************************************ 00:16:25.016 22:08:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:25.016 * Looking for test storage... 
00:16:25.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.016 22:08:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.016 22:08:06 -- nvmf/common.sh@7 -- # uname -s 00:16:25.016 22:08:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.016 22:08:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.016 22:08:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.016 22:08:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.016 22:08:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.016 22:08:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.016 22:08:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.016 22:08:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.016 22:08:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.016 22:08:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.016 22:08:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.016 22:08:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:25.016 22:08:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.016 22:08:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.016 22:08:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.016 22:08:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.016 22:08:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.016 22:08:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.016 22:08:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.016 22:08:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.016 22:08:06 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.016 22:08:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.016 22:08:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.016 22:08:06 -- paths/export.sh@5 -- # export PATH 00:16:25.016 22:08:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.016 22:08:06 -- nvmf/common.sh@47 -- # : 0 00:16:25.016 22:08:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.016 22:08:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.016 22:08:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.016 22:08:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.016 22:08:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.016 22:08:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.016 22:08:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.016 22:08:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.016 22:08:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.016 22:08:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.016 22:08:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:25.016 22:08:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:25.016 22:08:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.016 22:08:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:25.016 22:08:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:25.016 22:08:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:25.016 22:08:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.016 22:08:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.016 22:08:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.016 22:08:06 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:25.016 22:08:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:25.016 22:08:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:25.016 22:08:06 -- common/autotest_common.sh@10 -- # set +x 00:16:27.549 22:08:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.549 22:08:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.549 22:08:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.549 22:08:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.549 22:08:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.549 22:08:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.549 22:08:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.549 22:08:09 -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.549 22:08:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.549 22:08:09 -- nvmf/common.sh@296 -- # e810=() 00:16:27.549 22:08:09 -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.549 22:08:09 -- nvmf/common.sh@297 -- # x722=() 00:16:27.549 22:08:09 -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.549 22:08:09 -- nvmf/common.sh@298 -- # mlx=() 00:16:27.549 22:08:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.549 22:08:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.549 22:08:09 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.549 22:08:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.549 22:08:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.549 22:08:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.549 22:08:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:27.549 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:27.549 22:08:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.549 22:08:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:27.549 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:27.549 22:08:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.549 22:08:09 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.549 22:08:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.549 22:08:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.549 22:08:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:27.549 Found net devices under 0000:84:00.0: cvl_0_0 00:16:27.549 22:08:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.549 22:08:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.549 22:08:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.549 22:08:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.549 22:08:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:27.549 Found net devices under 0000:84:00.1: cvl_0_1 00:16:27.549 22:08:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.549 22:08:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:27.549 22:08:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:27.549 22:08:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:27.549 22:08:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.549 22:08:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.549 22:08:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.549 22:08:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.549 22:08:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.549 22:08:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.550 22:08:09 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.550 22:08:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.550 22:08:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.550 22:08:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.550 22:08:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.550 22:08:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.550 22:08:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.550 22:08:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.550 22:08:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.550 22:08:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.550 22:08:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.550 22:08:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.550 22:08:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.550 22:08:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:16:27.550 00:16:27.550 --- 10.0.0.2 ping statistics --- 00:16:27.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.550 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:27.550 22:08:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:16:27.550 00:16:27.550 --- 10.0.0.1 ping statistics --- 00:16:27.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.550 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:27.550 22:08:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.550 22:08:09 -- nvmf/common.sh@411 -- # return 0 00:16:27.550 22:08:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:27.550 22:08:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.550 22:08:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:27.550 22:08:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:27.550 22:08:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.550 22:08:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:27.550 22:08:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:27.550 22:08:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:27.550 22:08:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:27.550 22:08:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:27.550 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.550 22:08:09 -- nvmf/common.sh@470 -- # nvmfpid=3950480 00:16:27.550 22:08:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:27.550 22:08:09 -- nvmf/common.sh@471 -- # waitforlisten 3950480 00:16:27.550 22:08:09 -- common/autotest_common.sh@817 -- # '[' -z 3950480 ']' 00:16:27.550 22:08:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.550 22:08:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.550 22:08:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:27.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.550 22:08:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.550 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.550 [2024-04-24 22:08:09.526894] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:27.550 [2024-04-24 22:08:09.526982] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.550 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.550 [2024-04-24 22:08:09.606578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.550 [2024-04-24 22:08:09.731414] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.550 [2024-04-24 22:08:09.731489] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.550 [2024-04-24 22:08:09.731515] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.550 [2024-04-24 22:08:09.731529] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.550 [2024-04-24 22:08:09.731541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.550 [2024-04-24 22:08:09.731638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.550 [2024-04-24 22:08:09.731695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:27.550 [2024-04-24 22:08:09.731750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:27.550 [2024-04-24 22:08:09.731753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.809 22:08:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.809 22:08:09 -- common/autotest_common.sh@850 -- # return 0 00:16:27.809 22:08:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:27.809 22:08:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 22:08:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.809 22:08:09 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.809 22:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 [2024-04-24 22:08:09.906442] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.809 22:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.809 22:08:09 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.809 22:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 Malloc0 00:16:27.809 22:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.809 22:08:09 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.809 22:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 22:08:09 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:16:27.809 22:08:09 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.809 22:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 22:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.809 22:08:09 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.809 22:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.809 22:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:27.809 [2024-04-24 22:08:09.960716] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:27.809 [2024-04-24 22:08:09.961077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.809 22:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.809 22:08:09 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:27.809 22:08:09 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:27.809 22:08:09 -- nvmf/common.sh@521 -- # config=() 00:16:27.809 22:08:09 -- nvmf/common.sh@521 -- # local subsystem config 00:16:27.809 22:08:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.809 22:08:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.809 { 00:16:27.809 "params": { 00:16:27.809 "name": "Nvme$subsystem", 00:16:27.809 "trtype": "$TEST_TRANSPORT", 00:16:27.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.809 "adrfam": "ipv4", 00:16:27.809 "trsvcid": "$NVMF_PORT", 00:16:27.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.809 "hdgst": ${hdgst:-false}, 00:16:27.809 "ddgst": ${ddgst:-false} 00:16:27.810 }, 
00:16:27.810 "method": "bdev_nvme_attach_controller" 00:16:27.810 } 00:16:27.810 EOF 00:16:27.810 )") 00:16:27.810 22:08:09 -- nvmf/common.sh@543 -- # cat 00:16:27.810 22:08:09 -- nvmf/common.sh@545 -- # jq . 00:16:27.810 22:08:09 -- nvmf/common.sh@546 -- # IFS=, 00:16:27.810 22:08:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:27.810 "params": { 00:16:27.810 "name": "Nvme1", 00:16:27.810 "trtype": "tcp", 00:16:27.810 "traddr": "10.0.0.2", 00:16:27.810 "adrfam": "ipv4", 00:16:27.810 "trsvcid": "4420", 00:16:27.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.810 "hdgst": false, 00:16:27.810 "ddgst": false 00:16:27.810 }, 00:16:27.810 "method": "bdev_nvme_attach_controller" 00:16:27.810 }' 00:16:27.810 [2024-04-24 22:08:10.010580] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:27.810 [2024-04-24 22:08:10.010674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950505 ] 00:16:27.810 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.068 [2024-04-24 22:08:10.082875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.068 [2024-04-24 22:08:10.205489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.068 [2024-04-24 22:08:10.205550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.068 [2024-04-24 22:08:10.205554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.325 I/O targets: 00:16:28.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:28.325 00:16:28.325 00:16:28.325 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.325 http://cunit.sourceforge.net/ 00:16:28.325 00:16:28.325 00:16:28.325 Suite: bdevio tests on: Nvme1n1 00:16:28.325 Test: blockdev write read block ...passed 00:16:28.325 
Test: blockdev write zeroes read block ...passed 00:16:28.325 Test: blockdev write zeroes read no split ...passed 00:16:28.325 Test: blockdev write zeroes read split ...passed 00:16:28.325 Test: blockdev write zeroes read split partial ...passed 00:16:28.325 Test: blockdev reset ...[2024-04-24 22:08:10.571349] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.325 [2024-04-24 22:08:10.571485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141ef40 (9): Bad file descriptor 00:16:28.583 [2024-04-24 22:08:10.583429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:28.583 passed 00:16:28.583 Test: blockdev write read 8 blocks ...passed 00:16:28.583 Test: blockdev write read size > 128k ...passed 00:16:28.583 Test: blockdev write read invalid size ...passed 00:16:28.583 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.583 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.583 Test: blockdev write read max offset ...passed 00:16:28.583 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:28.583 Test: blockdev writev readv 8 blocks ...passed 00:16:28.583 Test: blockdev writev readv 30 x 1block ...passed 00:16:28.583 Test: blockdev writev readv block ...passed 00:16:28.583 Test: blockdev writev readv size > 128k ...passed 00:16:28.583 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:28.583 Test: blockdev comparev and writev ...[2024-04-24 22:08:10.798195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.798235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.798262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.798281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.798741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.798769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.798794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.798812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.799248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.799275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.799311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.799331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.583 [2024-04-24 22:08:10.799770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.799798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:16:28.583 [2024-04-24 22:08:10.799822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.583 [2024-04-24 22:08:10.799839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.841 passed 00:16:28.841 Test: blockdev nvme passthru rw ...passed 00:16:28.841 Test: blockdev nvme passthru vendor specific ...[2024-04-24 22:08:10.881841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.841 [2024-04-24 22:08:10.881876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.841 [2024-04-24 22:08:10.882117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.841 [2024-04-24 22:08:10.882155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.841 [2024-04-24 22:08:10.882409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.841 [2024-04-24 22:08:10.882441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.841 [2024-04-24 22:08:10.882667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.841 [2024-04-24 22:08:10.882693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.841 passed 00:16:28.841 Test: blockdev nvme admin passthru ...passed 00:16:28.841 Test: blockdev copy ...passed 00:16:28.841 00:16:28.841 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.841 suites 1 1 
n/a 0 0 00:16:28.841 tests 23 23 23 0 0 00:16:28.841 asserts 152 152 152 0 n/a 00:16:28.841 00:16:28.841 Elapsed time = 1.099 seconds 00:16:29.100 22:08:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.100 22:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.100 22:08:11 -- common/autotest_common.sh@10 -- # set +x 00:16:29.100 22:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.100 22:08:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:29.100 22:08:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:29.100 22:08:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:29.100 22:08:11 -- nvmf/common.sh@117 -- # sync 00:16:29.100 22:08:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.100 22:08:11 -- nvmf/common.sh@120 -- # set +e 00:16:29.100 22:08:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.100 22:08:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.100 rmmod nvme_tcp 00:16:29.100 rmmod nvme_fabrics 00:16:29.100 rmmod nvme_keyring 00:16:29.100 22:08:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.100 22:08:11 -- nvmf/common.sh@124 -- # set -e 00:16:29.100 22:08:11 -- nvmf/common.sh@125 -- # return 0 00:16:29.100 22:08:11 -- nvmf/common.sh@478 -- # '[' -n 3950480 ']' 00:16:29.100 22:08:11 -- nvmf/common.sh@479 -- # killprocess 3950480 00:16:29.100 22:08:11 -- common/autotest_common.sh@936 -- # '[' -z 3950480 ']' 00:16:29.100 22:08:11 -- common/autotest_common.sh@940 -- # kill -0 3950480 00:16:29.100 22:08:11 -- common/autotest_common.sh@941 -- # uname 00:16:29.100 22:08:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.100 22:08:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3950480 00:16:29.100 22:08:11 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:29.100 22:08:11 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:29.100 22:08:11 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3950480' 00:16:29.100 killing process with pid 3950480 00:16:29.100 22:08:11 -- common/autotest_common.sh@955 -- # kill 3950480 00:16:29.100 [2024-04-24 22:08:11.280179] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.100 22:08:11 -- common/autotest_common.sh@960 -- # wait 3950480 00:16:29.359 22:08:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:29.359 22:08:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:29.359 22:08:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:29.359 22:08:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.359 22:08:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.359 22:08:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.359 22:08:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.359 22:08:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.892 22:08:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.892 00:16:31.892 real 0m6.749s 00:16:31.892 user 0m10.121s 00:16:31.892 sys 0m2.471s 00:16:31.892 22:08:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.892 22:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.892 ************************************ 00:16:31.892 END TEST nvmf_bdevio 00:16:31.892 ************************************ 00:16:31.892 22:08:13 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:31.892 22:08:13 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:31.892 22:08:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:31.892 22:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.892 22:08:13 -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.892 ************************************ 00:16:31.892 START TEST nvmf_bdevio_no_huge 00:16:31.892 ************************************ 00:16:31.892 22:08:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:31.892 * Looking for test storage... 00:16:31.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.892 22:08:13 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.892 22:08:13 -- nvmf/common.sh@7 -- # uname -s 00:16:31.892 22:08:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.892 22:08:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.892 22:08:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.892 22:08:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.892 22:08:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.892 22:08:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.892 22:08:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.892 22:08:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.892 22:08:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.892 22:08:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.892 22:08:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.892 22:08:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.893 22:08:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.893 22:08:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.893 22:08:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.893 22:08:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.893 22:08:13 -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.893 22:08:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.893 22:08:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.893 22:08:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.893 22:08:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.893 22:08:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.893 22:08:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.893 22:08:13 -- paths/export.sh@5 -- # export PATH 00:16:31.893 22:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.893 22:08:13 -- nvmf/common.sh@47 -- # : 0 00:16:31.893 22:08:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.893 22:08:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.893 22:08:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.893 22:08:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.893 22:08:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.893 22:08:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.893 22:08:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.893 22:08:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.893 22:08:13 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.893 22:08:13 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.893 22:08:13 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:16:31.893 22:08:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:31.893 22:08:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.893 22:08:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:31.893 22:08:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:31.893 22:08:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:31.893 22:08:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.893 22:08:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.893 22:08:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.893 22:08:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:31.893 22:08:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:31.893 22:08:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.893 22:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:34.423 22:08:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.423 22:08:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.423 22:08:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.423 22:08:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.423 22:08:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.423 22:08:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.423 22:08:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.423 22:08:16 -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.423 22:08:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.423 22:08:16 -- nvmf/common.sh@296 -- # e810=() 00:16:34.423 22:08:16 -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.423 22:08:16 -- nvmf/common.sh@297 -- # x722=() 00:16:34.423 22:08:16 -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.423 22:08:16 -- nvmf/common.sh@298 -- # mlx=() 00:16:34.423 22:08:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.423 22:08:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.423 22:08:16 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.423 22:08:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.423 22:08:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:34.423 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:34.423 22:08:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.423 22:08:16 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:34.423 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:34.423 22:08:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.423 22:08:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.423 22:08:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.423 22:08:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:34.423 Found net devices under 0000:84:00.0: cvl_0_0 00:16:34.423 22:08:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.423 22:08:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.423 22:08:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.423 22:08:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:34.423 Found net devices under 0000:84:00.1: cvl_0_1 00:16:34.423 22:08:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:34.423 22:08:16 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:16:34.423 22:08:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:34.423 22:08:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.423 22:08:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.423 22:08:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.423 22:08:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.423 22:08:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.423 22:08:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.423 22:08:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.423 22:08:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.423 22:08:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.423 22:08:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.423 22:08:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.423 22:08:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.423 22:08:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.423 22:08:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.423 22:08:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.423 22:08:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.423 22:08:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.423 22:08:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.423 22:08:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:34.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:16:34.423 00:16:34.423 --- 10.0.0.2 ping statistics --- 00:16:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.423 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:16:34.423 22:08:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:16:34.423 00:16:34.423 --- 10.0.0.1 ping statistics --- 00:16:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.423 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:34.423 22:08:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.423 22:08:16 -- nvmf/common.sh@411 -- # return 0 00:16:34.423 22:08:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:34.423 22:08:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.423 22:08:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:34.423 22:08:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.423 22:08:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:34.423 22:08:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:34.423 22:08:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:34.423 22:08:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:34.423 22:08:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.423 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.423 22:08:16 -- nvmf/common.sh@470 -- # nvmfpid=3952713 00:16:34.423 22:08:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:34.423 22:08:16 -- nvmf/common.sh@471 -- # waitforlisten 3952713 00:16:34.423 22:08:16 -- 
common/autotest_common.sh@817 -- # '[' -z 3952713 ']' 00:16:34.423 22:08:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.423 22:08:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.423 22:08:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.423 22:08:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.423 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.423 [2024-04-24 22:08:16.262821] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:34.423 [2024-04-24 22:08:16.262912] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:34.423 [2024-04-24 22:08:16.350507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.423 [2024-04-24 22:08:16.473066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.423 [2024-04-24 22:08:16.473138] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.423 [2024-04-24 22:08:16.473155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.423 [2024-04-24 22:08:16.473174] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.423 [2024-04-24 22:08:16.473187] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.423 [2024-04-24 22:08:16.473284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:34.423 [2024-04-24 22:08:16.473338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:34.423 [2024-04-24 22:08:16.473412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:34.423 [2024-04-24 22:08:16.473416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.682 22:08:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.682 22:08:16 -- common/autotest_common.sh@850 -- # return 0 00:16:34.682 22:08:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:34.682 22:08:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 22:08:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.682 22:08:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.682 22:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 [2024-04-24 22:08:16.829329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.682 22:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.682 22:08:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.682 22:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 Malloc0 00:16:34.682 22:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.682 22:08:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.682 22:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 22:08:16 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:16:34.682 22:08:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.682 22:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 22:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.682 22:08:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.682 22:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.682 22:08:16 -- common/autotest_common.sh@10 -- # set +x 00:16:34.682 [2024-04-24 22:08:16.869457] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:34.682 [2024-04-24 22:08:16.869779] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.682 22:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.682 22:08:16 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:34.682 22:08:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:34.682 22:08:16 -- nvmf/common.sh@521 -- # config=() 00:16:34.682 22:08:16 -- nvmf/common.sh@521 -- # local subsystem config 00:16:34.682 22:08:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.682 22:08:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.682 { 00:16:34.682 "params": { 00:16:34.682 "name": "Nvme$subsystem", 00:16:34.682 "trtype": "$TEST_TRANSPORT", 00:16:34.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.682 "adrfam": "ipv4", 00:16:34.682 "trsvcid": "$NVMF_PORT", 00:16:34.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.682 "hdgst": ${hdgst:-false}, 00:16:34.682 "ddgst": 
${ddgst:-false} 00:16:34.682 }, 00:16:34.682 "method": "bdev_nvme_attach_controller" 00:16:34.682 } 00:16:34.682 EOF 00:16:34.682 )") 00:16:34.682 22:08:16 -- nvmf/common.sh@543 -- # cat 00:16:34.682 22:08:16 -- nvmf/common.sh@545 -- # jq . 00:16:34.682 22:08:16 -- nvmf/common.sh@546 -- # IFS=, 00:16:34.682 22:08:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:34.682 "params": { 00:16:34.682 "name": "Nvme1", 00:16:34.682 "trtype": "tcp", 00:16:34.682 "traddr": "10.0.0.2", 00:16:34.682 "adrfam": "ipv4", 00:16:34.682 "trsvcid": "4420", 00:16:34.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.682 "hdgst": false, 00:16:34.682 "ddgst": false 00:16:34.682 }, 00:16:34.682 "method": "bdev_nvme_attach_controller" 00:16:34.682 }' 00:16:34.682 [2024-04-24 22:08:16.921457] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:34.682 [2024-04-24 22:08:16.921545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3952749 ] 00:16:34.940 [2024-04-24 22:08:17.001738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.940 [2024-04-24 22:08:17.127082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.940 [2024-04-24 22:08:17.127141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.940 [2024-04-24 22:08:17.127146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.198 I/O targets: 00:16:35.198 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.198 00:16:35.198 00:16:35.198 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.198 http://cunit.sourceforge.net/ 00:16:35.198 00:16:35.198 00:16:35.198 Suite: bdevio tests on: Nvme1n1 00:16:35.198 Test: blockdev write read block ...passed 00:16:35.198 Test: blockdev write zeroes read 
block ...passed 00:16:35.198 Test: blockdev write zeroes read no split ...passed 00:16:35.457 Test: blockdev write zeroes read split ...passed 00:16:35.457 Test: blockdev write zeroes read split partial ...passed 00:16:35.457 Test: blockdev reset ...[2024-04-24 22:08:17.516223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.457 [2024-04-24 22:08:17.516352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bcbd0 (9): Bad file descriptor 00:16:35.457 [2024-04-24 22:08:17.532663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:35.457 passed 00:16:35.457 Test: blockdev write read 8 blocks ...passed 00:16:35.457 Test: blockdev write read size > 128k ...passed 00:16:35.457 Test: blockdev write read invalid size ...passed 00:16:35.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:35.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:35.457 Test: blockdev write read max offset ...passed 00:16:35.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:35.457 Test: blockdev writev readv 8 blocks ...passed 00:16:35.457 Test: blockdev writev readv 30 x 1block ...passed 00:16:35.715 Test: blockdev writev readv block ...passed 00:16:35.715 Test: blockdev writev readv size > 128k ...passed 00:16:35.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:35.715 Test: blockdev comparev and writev ...[2024-04-24 22:08:17.750074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.750115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.750141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.750160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.750591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.750619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.750644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.750662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.751173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.751200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.751230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.751249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.751686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.751713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.751736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.715 [2024-04-24 22:08:17.751753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:35.715 passed 00:16:35.715 Test: blockdev nvme passthru rw ...passed 00:16:35.715 Test: blockdev nvme passthru vendor specific ...[2024-04-24 22:08:17.833808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.715 [2024-04-24 22:08:17.833839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.834064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.715 [2024-04-24 22:08:17.834090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.834285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.715 [2024-04-24 22:08:17.834312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:35.715 [2024-04-24 22:08:17.834520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.715 [2024-04-24 22:08:17.834547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:35.715 passed 00:16:35.715 Test: blockdev nvme admin passthru ...passed 00:16:35.715 Test: blockdev copy ...passed 00:16:35.715 00:16:35.715 Run Summary: Type Total Ran Passed Failed Inactive 00:16:35.715 suites 1 1 n/a 0 0 00:16:35.715 tests 23 23 23 0 0 00:16:35.715 
asserts 152 152 152 0 n/a 00:16:35.715 00:16:35.715 Elapsed time = 1.183 seconds 00:16:36.282 22:08:18 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.282 22:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.282 22:08:18 -- common/autotest_common.sh@10 -- # set +x 00:16:36.282 22:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.282 22:08:18 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:36.282 22:08:18 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:36.282 22:08:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:36.282 22:08:18 -- nvmf/common.sh@117 -- # sync 00:16:36.282 22:08:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.282 22:08:18 -- nvmf/common.sh@120 -- # set +e 00:16:36.282 22:08:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.282 22:08:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.282 rmmod nvme_tcp 00:16:36.282 rmmod nvme_fabrics 00:16:36.282 rmmod nvme_keyring 00:16:36.282 22:08:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.282 22:08:18 -- nvmf/common.sh@124 -- # set -e 00:16:36.282 22:08:18 -- nvmf/common.sh@125 -- # return 0 00:16:36.282 22:08:18 -- nvmf/common.sh@478 -- # '[' -n 3952713 ']' 00:16:36.282 22:08:18 -- nvmf/common.sh@479 -- # killprocess 3952713 00:16:36.282 22:08:18 -- common/autotest_common.sh@936 -- # '[' -z 3952713 ']' 00:16:36.282 22:08:18 -- common/autotest_common.sh@940 -- # kill -0 3952713 00:16:36.282 22:08:18 -- common/autotest_common.sh@941 -- # uname 00:16:36.282 22:08:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.282 22:08:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3952713 00:16:36.282 22:08:18 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:36.282 22:08:18 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:36.282 22:08:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
3952713' 00:16:36.282 killing process with pid 3952713 00:16:36.282 22:08:18 -- common/autotest_common.sh@955 -- # kill 3952713 00:16:36.282 [2024-04-24 22:08:18.381176] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:36.282 22:08:18 -- common/autotest_common.sh@960 -- # wait 3952713 00:16:36.848 22:08:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:36.849 22:08:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:36.849 22:08:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:36.849 22:08:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.849 22:08:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.849 22:08:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.849 22:08:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.849 22:08:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.749 22:08:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.749 00:16:38.749 real 0m7.070s 00:16:38.749 user 0m12.106s 00:16:38.749 sys 0m2.806s 00:16:38.749 22:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.749 22:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:38.749 ************************************ 00:16:38.749 END TEST nvmf_bdevio_no_huge 00:16:38.749 ************************************ 00:16:38.750 22:08:20 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:38.750 22:08:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.750 22:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.750 22:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:38.750 ************************************ 00:16:38.750 START TEST nvmf_tls 00:16:38.750 
************************************ 00:16:38.750 22:08:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:39.009 * Looking for test storage... 00:16:39.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.009 22:08:21 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.009 22:08:21 -- nvmf/common.sh@7 -- # uname -s 00:16:39.009 22:08:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.009 22:08:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.009 22:08:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.009 22:08:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.009 22:08:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.009 22:08:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.009 22:08:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.009 22:08:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.009 22:08:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.009 22:08:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.009 22:08:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.009 22:08:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.009 22:08:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.009 22:08:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.009 22:08:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.009 22:08:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.009 22:08:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.009 22:08:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:16:39.009 22:08:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.009 22:08:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.009 22:08:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.009 22:08:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.009 22:08:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.009 22:08:21 -- paths/export.sh@5 -- # export PATH 00:16:39.009 22:08:21 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.009 22:08:21 -- nvmf/common.sh@47 -- # : 0 00:16:39.009 22:08:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.009 22:08:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.009 22:08:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.009 22:08:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.009 22:08:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.009 22:08:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.009 22:08:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.009 22:08:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.009 22:08:21 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.009 22:08:21 -- target/tls.sh@62 -- # nvmftestinit 00:16:39.009 22:08:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:39.009 22:08:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.009 22:08:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:39.009 22:08:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:39.009 22:08:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:39.009 22:08:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.009 22:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.009 22:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.009 22:08:21 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:39.009 22:08:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:39.009 22:08:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.009 22:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:41.574 22:08:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.574 22:08:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.574 22:08:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.574 22:08:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.574 22:08:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.574 22:08:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.574 22:08:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.574 22:08:23 -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.574 22:08:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.574 22:08:23 -- nvmf/common.sh@296 -- # e810=() 00:16:41.574 22:08:23 -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.574 22:08:23 -- nvmf/common.sh@297 -- # x722=() 00:16:41.574 22:08:23 -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.574 22:08:23 -- nvmf/common.sh@298 -- # mlx=() 00:16:41.574 22:08:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.574 22:08:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.574 22:08:23 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.574 22:08:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.574 22:08:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.574 22:08:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.574 22:08:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.574 22:08:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:41.574 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:41.574 22:08:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.574 22:08:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:41.574 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:41.574 22:08:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.574 22:08:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.575 22:08:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.575 22:08:23 -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.575 22:08:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.575 22:08:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.575 22:08:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.575 22:08:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:41.575 Found net devices under 0000:84:00.0: cvl_0_0 00:16:41.575 22:08:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.575 22:08:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.575 22:08:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.575 22:08:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:41.575 22:08:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.575 22:08:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:41.575 Found net devices under 0000:84:00.1: cvl_0_1 00:16:41.575 22:08:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.575 22:08:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:41.575 22:08:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:41.575 22:08:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:41.575 22:08:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.575 22:08:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.575 22:08:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.575 22:08:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.575 22:08:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.575 22:08:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.575 22:08:23 -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.575 22:08:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.575 22:08:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.575 22:08:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.575 22:08:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.575 22:08:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.575 22:08:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.575 22:08:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.575 22:08:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.575 22:08:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.575 22:08:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.575 22:08:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.575 22:08:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.575 22:08:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:16:41.575 00:16:41.575 --- 10.0.0.2 ping statistics --- 00:16:41.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.575 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:41.575 22:08:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:16:41.575 00:16:41.575 --- 10.0.0.1 ping statistics --- 00:16:41.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.575 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:41.575 22:08:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.575 22:08:23 -- nvmf/common.sh@411 -- # return 0 00:16:41.575 22:08:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:41.575 22:08:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.575 22:08:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:41.575 22:08:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.575 22:08:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:41.575 22:08:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:41.575 22:08:23 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:41.575 22:08:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:41.575 22:08:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:41.575 22:08:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.575 22:08:23 -- nvmf/common.sh@470 -- # nvmfpid=3954968 00:16:41.575 22:08:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:41.575 22:08:23 -- nvmf/common.sh@471 -- # waitforlisten 3954968 00:16:41.575 22:08:23 -- common/autotest_common.sh@817 -- # '[' -z 3954968 ']' 00:16:41.575 22:08:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.575 22:08:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.575 22:08:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:41.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.575 22:08:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.575 22:08:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.575 [2024-04-24 22:08:23.519035] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:41.575 [2024-04-24 22:08:23.519203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.575 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.575 [2024-04-24 22:08:23.634740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.575 [2024-04-24 22:08:23.774148] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.575 [2024-04-24 22:08:23.774221] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.575 [2024-04-24 22:08:23.774237] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.575 [2024-04-24 22:08:23.774251] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.575 [2024-04-24 22:08:23.774263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:41.575 [2024-04-24 22:08:23.774316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.575 22:08:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.575 22:08:23 -- common/autotest_common.sh@850 -- # return 0 00:16:41.575 22:08:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:41.575 22:08:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:41.575 22:08:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.835 22:08:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.835 22:08:23 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:41.835 22:08:23 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:42.401 true 00:16:42.401 22:08:24 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.401 22:08:24 -- target/tls.sh@73 -- # jq -r .tls_version 00:16:42.660 22:08:24 -- target/tls.sh@73 -- # version=0 00:16:42.660 22:08:24 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:42.660 22:08:24 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:42.918 22:08:25 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:42.918 22:08:25 -- target/tls.sh@81 -- # jq -r .tls_version 00:16:43.177 22:08:25 -- target/tls.sh@81 -- # version=13 00:16:43.177 22:08:25 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:43.177 22:08:25 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:43.435 22:08:25 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.435 22:08:25 -- target/tls.sh@89 -- # jq -r .tls_version 
00:16:43.693 22:08:25 -- target/tls.sh@89 -- # version=7 00:16:43.693 22:08:25 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:43.693 22:08:25 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.693 22:08:25 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:43.951 22:08:26 -- target/tls.sh@96 -- # ktls=false 00:16:43.951 22:08:26 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:43.951 22:08:26 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:44.210 22:08:26 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.210 22:08:26 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:44.775 22:08:26 -- target/tls.sh@104 -- # ktls=true 00:16:44.775 22:08:26 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:44.775 22:08:26 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:45.033 22:08:27 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.033 22:08:27 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:45.291 22:08:27 -- target/tls.sh@112 -- # ktls=false 00:16:45.291 22:08:27 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:45.291 22:08:27 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:45.291 22:08:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:45.291 22:08:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # digest=1 00:16:45.291 22:08:27 -- nvmf/common.sh@694 -- # 
python - 00:16:45.291 22:08:27 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:45.291 22:08:27 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:45.291 22:08:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:45.291 22:08:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:16:45.291 22:08:27 -- nvmf/common.sh@693 -- # digest=1 00:16:45.291 22:08:27 -- nvmf/common.sh@694 -- # python - 00:16:45.291 22:08:27 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:45.291 22:08:27 -- target/tls.sh@121 -- # mktemp 00:16:45.291 22:08:27 -- target/tls.sh@121 -- # key_path=/tmp/tmp.DCHgTHohQ3 00:16:45.291 22:08:27 -- target/tls.sh@122 -- # mktemp 00:16:45.291 22:08:27 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3M1LyAddSE 00:16:45.291 22:08:27 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:45.291 22:08:27 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:45.291 22:08:27 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.DCHgTHohQ3 00:16:45.291 22:08:27 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3M1LyAddSE 00:16:45.291 22:08:27 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:45.857 22:08:27 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:46.115 22:08:28 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.DCHgTHohQ3 00:16:46.115 22:08:28 -- target/tls.sh@49 -- # local key=/tmp/tmp.DCHgTHohQ3 00:16:46.115 22:08:28 -- target/tls.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:46.373 [2024-04-24 22:08:28.626470] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.632 22:08:28 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.891 22:08:28 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:47.149 [2024-04-24 22:08:29.292207] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:47.149 [2024-04-24 22:08:29.292324] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.149 [2024-04-24 22:08:29.292585] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.149 22:08:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.716 malloc0 00:16:47.716 22:08:29 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:47.974 22:08:30 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DCHgTHohQ3 00:16:48.540 [2024-04-24 22:08:30.495764] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:48.540 22:08:30 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.DCHgTHohQ3 00:16:48.540 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.500 Initializing NVMe Controllers 00:16:58.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:58.500 Initialization complete. Launching workers. 00:16:58.500 ======================================================== 00:16:58.500 Latency(us) 00:16:58.500 Device Information : IOPS MiB/s Average min max 00:16:58.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7267.88 28.39 8808.87 1271.66 10880.54 00:16:58.500 ======================================================== 00:16:58.500 Total : 7267.88 28.39 8808.87 1271.66 10880.54 00:16:58.500 00:16:58.500 22:08:40 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DCHgTHohQ3 00:16:58.500 22:08:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:58.500 22:08:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:58.500 22:08:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:58.500 22:08:40 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DCHgTHohQ3' 00:16:58.500 22:08:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.500 22:08:40 -- target/tls.sh@28 -- # bdevperf_pid=3956995 00:16:58.500 22:08:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.500 22:08:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.500 22:08:40 -- target/tls.sh@31 -- # waitforlisten 3956995 /var/tmp/bdevperf.sock 00:16:58.500 22:08:40 -- common/autotest_common.sh@817 -- # '[' -z 3956995 ']' 
00:16:58.500 22:08:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.500 22:08:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.500 22:08:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.500 22:08:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.500 22:08:40 -- common/autotest_common.sh@10 -- # set +x 00:16:58.500 [2024-04-24 22:08:40.714184] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:58.500 [2024-04-24 22:08:40.714350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956995 ] 00:16:58.758 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.758 [2024-04-24 22:08:40.821886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.758 [2024-04-24 22:08:40.942217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.016 22:08:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.016 22:08:41 -- common/autotest_common.sh@850 -- # return 0 00:16:59.016 22:08:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DCHgTHohQ3 00:16:59.274 [2024-04-24 22:08:41.488099] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.274 [2024-04-24 22:08:41.488252] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk 
to be removed in v24.09 00:16:59.533 TLSTESTn1 00:16:59.533 22:08:41 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:59.791 Running I/O for 10 seconds... 00:17:09.771 00:17:09.771 Latency(us) 00:17:09.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.771 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:09.771 Verification LBA range: start 0x0 length 0x2000 00:17:09.771 TLSTESTn1 : 10.04 2656.77 10.38 0.00 0.00 48064.01 6990.51 79614.10 00:17:09.771 =================================================================================================================== 00:17:09.771 Total : 2656.77 10.38 0.00 0.00 48064.01 6990.51 79614.10 00:17:09.771 0 00:17:09.771 22:08:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.771 22:08:51 -- target/tls.sh@45 -- # killprocess 3956995 00:17:09.771 22:08:51 -- common/autotest_common.sh@936 -- # '[' -z 3956995 ']' 00:17:09.771 22:08:51 -- common/autotest_common.sh@940 -- # kill -0 3956995 00:17:09.771 22:08:51 -- common/autotest_common.sh@941 -- # uname 00:17:09.771 22:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.771 22:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3956995 00:17:09.771 22:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:09.771 22:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:09.771 22:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3956995' 00:17:09.772 killing process with pid 3956995 00:17:09.772 22:08:51 -- common/autotest_common.sh@955 -- # kill 3956995 00:17:09.772 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.772 00:17:09.772 Latency(us) 00:17:09.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.772 
=================================================================================================================== 00:17:09.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.772 [2024-04-24 22:08:51.906174] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:09.772 22:08:51 -- common/autotest_common.sh@960 -- # wait 3956995 00:17:10.030 22:08:52 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3M1LyAddSE 00:17:10.030 22:08:52 -- common/autotest_common.sh@638 -- # local es=0 00:17:10.030 22:08:52 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3M1LyAddSE 00:17:10.030 22:08:52 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:10.030 22:08:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.030 22:08:52 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:10.030 22:08:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:10.030 22:08:52 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3M1LyAddSE 00:17:10.030 22:08:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.030 22:08:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.030 22:08:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.030 22:08:52 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3M1LyAddSE' 00:17:10.030 22:08:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.030 22:08:52 -- target/tls.sh@28 -- # bdevperf_pid=3958310 00:17:10.030 22:08:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.030 22:08:52 -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.030 22:08:52 -- target/tls.sh@31 -- # waitforlisten 3958310 /var/tmp/bdevperf.sock 00:17:10.030 22:08:52 -- common/autotest_common.sh@817 -- # '[' -z 3958310 ']' 00:17:10.030 22:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.030 22:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.030 22:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.030 22:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.030 22:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.030 [2024-04-24 22:08:52.279851] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:10.030 [2024-04-24 22:08:52.280016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958310 ] 00:17:10.288 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.288 [2024-04-24 22:08:52.384135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.288 [2024-04-24 22:08:52.502652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.578 22:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.578 22:08:52 -- common/autotest_common.sh@850 -- # return 0 00:17:10.578 22:08:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3M1LyAddSE 00:17:11.162 [2024-04-24 22:08:53.164688] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: 
*NOTICE*: TLS support is considered experimental 00:17:11.162 [2024-04-24 22:08:53.164871] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:11.162 [2024-04-24 22:08:53.172322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.162 [2024-04-24 22:08:53.173254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58870 (107): Transport endpoint is not connected 00:17:11.162 [2024-04-24 22:08:53.174242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb58870 (9): Bad file descriptor 00:17:11.162 [2024-04-24 22:08:53.175241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.162 [2024-04-24 22:08:53.175267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.162 [2024-04-24 22:08:53.175283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:11.162 request: 00:17:11.162 { 00:17:11.162 "name": "TLSTEST", 00:17:11.162 "trtype": "tcp", 00:17:11.162 "traddr": "10.0.0.2", 00:17:11.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.162 "adrfam": "ipv4", 00:17:11.162 "trsvcid": "4420", 00:17:11.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.162 "psk": "/tmp/tmp.3M1LyAddSE", 00:17:11.162 "method": "bdev_nvme_attach_controller", 00:17:11.162 "req_id": 1 00:17:11.162 } 00:17:11.162 Got JSON-RPC error response 00:17:11.162 response: 00:17:11.162 { 00:17:11.162 "code": -32602, 00:17:11.162 "message": "Invalid parameters" 00:17:11.162 } 00:17:11.162 22:08:53 -- target/tls.sh@36 -- # killprocess 3958310 00:17:11.162 22:08:53 -- common/autotest_common.sh@936 -- # '[' -z 3958310 ']' 00:17:11.162 22:08:53 -- common/autotest_common.sh@940 -- # kill -0 3958310 00:17:11.162 22:08:53 -- common/autotest_common.sh@941 -- # uname 00:17:11.162 22:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.162 22:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3958310 00:17:11.162 22:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:11.162 22:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:11.162 22:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3958310' 00:17:11.162 killing process with pid 3958310 00:17:11.162 22:08:53 -- common/autotest_common.sh@955 -- # kill 3958310 00:17:11.162 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.162 00:17:11.162 Latency(us) 00:17:11.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.162 =================================================================================================================== 00:17:11.162 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.162 [2024-04-24 22:08:53.225976] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:11.162 22:08:53 -- common/autotest_common.sh@960 -- # wait 3958310 00:17:11.421 22:08:53 -- target/tls.sh@37 -- # return 1 00:17:11.421 22:08:53 -- common/autotest_common.sh@641 -- # es=1 00:17:11.421 22:08:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:11.421 22:08:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:11.421 22:08:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:11.421 22:08:53 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DCHgTHohQ3 00:17:11.421 22:08:53 -- common/autotest_common.sh@638 -- # local es=0 00:17:11.421 22:08:53 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DCHgTHohQ3 00:17:11.421 22:08:53 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:11.422 22:08:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:11.422 22:08:53 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:11.422 22:08:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:11.422 22:08:53 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DCHgTHohQ3 00:17:11.422 22:08:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.422 22:08:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.422 22:08:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:11.422 22:08:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DCHgTHohQ3' 00:17:11.422 22:08:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.422 22:08:53 -- target/tls.sh@28 -- # bdevperf_pid=3958453 00:17:11.422 22:08:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
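For reference, the failed attach above is driven by a single JSON-RPC call; the `request:` dump in the log shows its exact body. A minimal sketch that rebuilds that payload (field values copied from the dump above; the helper name is illustrative, not SPDK code):

```python
import json

def build_attach_request(subnqn, hostnqn, psk_path, req_id=1):
    """Rebuild the bdev_nvme_attach_controller request body as it
    appears in the log's `request:` dump (sketch, not SPDK source)."""
    return {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "hostnqn": hostnqn,
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": subnqn,
        "psk": psk_path,
        "method": "bdev_nvme_attach_controller",
        "req_id": req_id,
    }

payload = json.dumps(build_attach_request(
    "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1",
    "/tmp/tmp.3M1LyAddSE"))
```

The target rejects it with `-32602 Invalid parameters` because the PSK file contents don't match what the subsystem was configured with.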
00:17:11.422 22:08:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.422 22:08:53 -- target/tls.sh@31 -- # waitforlisten 3958453 /var/tmp/bdevperf.sock 00:17:11.422 22:08:53 -- common/autotest_common.sh@817 -- # '[' -z 3958453 ']' 00:17:11.422 22:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.422 22:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.422 22:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.422 22:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.422 22:08:53 -- common/autotest_common.sh@10 -- # set +x 00:17:11.422 [2024-04-24 22:08:53.594842] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:11.422 [2024-04-24 22:08:53.595015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958453 ] 00:17:11.422 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.680 [2024-04-24 22:08:53.694078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.680 [2024-04-24 22:08:53.815579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.938 22:08:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.938 22:08:54 -- common/autotest_common.sh@850 -- # return 0 00:17:11.938 22:08:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.DCHgTHohQ3 00:17:12.196 [2024-04-24 22:08:54.341029] 
bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.196 [2024-04-24 22:08:54.341175] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:12.196 [2024-04-24 22:08:54.346703] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.196 [2024-04-24 22:08:54.346740] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.197 [2024-04-24 22:08:54.346785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:12.197 [2024-04-24 22:08:54.347312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df870 (107): Transport endpoint is not connected 00:17:12.197 [2024-04-24 22:08:54.348300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df870 (9): Bad file descriptor 00:17:12.197 [2024-04-24 22:08:54.349299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:12.197 [2024-04-24 22:08:54.349325] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:12.197 [2024-04-24 22:08:54.349340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:12.197 request: 00:17:12.197 { 00:17:12.197 "name": "TLSTEST", 00:17:12.197 "trtype": "tcp", 00:17:12.197 "traddr": "10.0.0.2", 00:17:12.197 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:12.197 "adrfam": "ipv4", 00:17:12.197 "trsvcid": "4420", 00:17:12.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.197 "psk": "/tmp/tmp.DCHgTHohQ3", 00:17:12.197 "method": "bdev_nvme_attach_controller", 00:17:12.197 "req_id": 1 00:17:12.197 } 00:17:12.197 Got JSON-RPC error response 00:17:12.197 response: 00:17:12.197 { 00:17:12.197 "code": -32602, 00:17:12.197 "message": "Invalid parameters" 00:17:12.197 } 00:17:12.197 22:08:54 -- target/tls.sh@36 -- # killprocess 3958453 00:17:12.197 22:08:54 -- common/autotest_common.sh@936 -- # '[' -z 3958453 ']' 00:17:12.197 22:08:54 -- common/autotest_common.sh@940 -- # kill -0 3958453 00:17:12.197 22:08:54 -- common/autotest_common.sh@941 -- # uname 00:17:12.197 22:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.197 22:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3958453 00:17:12.197 22:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:12.197 22:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:12.197 22:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3958453' 00:17:12.197 killing process with pid 3958453 00:17:12.197 22:08:54 -- common/autotest_common.sh@955 -- # kill 3958453 00:17:12.197 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.197 00:17:12.197 Latency(us) 00:17:12.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.197 =================================================================================================================== 00:17:12.197 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.197 [2024-04-24 22:08:54.402896] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:12.197 22:08:54 -- common/autotest_common.sh@960 -- # wait 3958453 00:17:12.455 22:08:54 -- target/tls.sh@37 -- # return 1 00:17:12.455 22:08:54 -- common/autotest_common.sh@641 -- # es=1 00:17:12.455 22:08:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:12.455 22:08:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:12.455 22:08:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:12.455 22:08:54 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DCHgTHohQ3 00:17:12.455 22:08:54 -- common/autotest_common.sh@638 -- # local es=0 00:17:12.455 22:08:54 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DCHgTHohQ3 00:17:12.455 22:08:54 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:12.455 22:08:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:12.455 22:08:54 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:12.455 22:08:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:12.455 22:08:54 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DCHgTHohQ3 00:17:12.455 22:08:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:12.455 22:08:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:12.455 22:08:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:12.455 22:08:54 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DCHgTHohQ3' 00:17:12.455 22:08:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.455 22:08:54 -- target/tls.sh@28 -- # bdevperf_pid=3958594 00:17:12.455 22:08:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
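The `Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>` errors above show how the target looks up the PSK during the TLS handshake: the identity string is derived from the host and subsystem NQNs. A sketch of that string, assuming the layout inferred from the log messages (the version and hash fields here are read off the logged `NVMe0R01` token, not taken from SPDK source):

```python
def tls_psk_identity(hostnqn, subnqn, version=0, hash_id="01"):
    # Assumed layout inferred from the "NVMe0R01 <hostnqn> <subnqn>"
    # lookup-failure messages in the log; not authoritative.
    return f"NVMe{version}R{hash_id} {hostnqn} {subnqn}"

ident = tls_psk_identity("nqn.2016-06.io.spdk:host2",
                         "nqn.2016-06.io.spdk:cnode1")
```

Because `host2` was never registered against `cnode1` with `nvmf_subsystem_add_host`, the lookup for this identity fails and the connection is torn down, which is exactly the negative case the `NOT run_bdevperf` wrapper asserts.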
00:17:12.455 22:08:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.455 22:08:54 -- target/tls.sh@31 -- # waitforlisten 3958594 /var/tmp/bdevperf.sock 00:17:12.455 22:08:54 -- common/autotest_common.sh@817 -- # '[' -z 3958594 ']' 00:17:12.455 22:08:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.455 22:08:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:12.455 22:08:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.455 22:08:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:12.455 22:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:12.714 [2024-04-24 22:08:54.719387] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:12.714 [2024-04-24 22:08:54.719496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958594 ] 00:17:12.714 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.714 [2024-04-24 22:08:54.795010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.714 [2024-04-24 22:08:54.915832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.972 22:08:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:12.972 22:08:55 -- common/autotest_common.sh@850 -- # return 0 00:17:12.972 22:08:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DCHgTHohQ3 00:17:13.230 [2024-04-24 22:08:55.357726] 
bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.230 [2024-04-24 22:08:55.357879] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:13.230 [2024-04-24 22:08:55.367840] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:13.230 [2024-04-24 22:08:55.367876] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:13.230 [2024-04-24 22:08:55.367921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:13.230 [2024-04-24 22:08:55.368189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf04870 (107): Transport endpoint is not connected 00:17:13.230 [2024-04-24 22:08:55.369178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf04870 (9): Bad file descriptor 00:17:13.230 [2024-04-24 22:08:55.370178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:13.230 [2024-04-24 22:08:55.370203] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:13.230 [2024-04-24 22:08:55.370219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:13.230 request: 00:17:13.230 { 00:17:13.230 "name": "TLSTEST", 00:17:13.230 "trtype": "tcp", 00:17:13.230 "traddr": "10.0.0.2", 00:17:13.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.230 "adrfam": "ipv4", 00:17:13.230 "trsvcid": "4420", 00:17:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:13.230 "psk": "/tmp/tmp.DCHgTHohQ3", 00:17:13.230 "method": "bdev_nvme_attach_controller", 00:17:13.230 "req_id": 1 00:17:13.230 } 00:17:13.230 Got JSON-RPC error response 00:17:13.230 response: 00:17:13.230 { 00:17:13.230 "code": -32602, 00:17:13.230 "message": "Invalid parameters" 00:17:13.230 } 00:17:13.230 22:08:55 -- target/tls.sh@36 -- # killprocess 3958594 00:17:13.230 22:08:55 -- common/autotest_common.sh@936 -- # '[' -z 3958594 ']' 00:17:13.230 22:08:55 -- common/autotest_common.sh@940 -- # kill -0 3958594 00:17:13.230 22:08:55 -- common/autotest_common.sh@941 -- # uname 00:17:13.230 22:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.230 22:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3958594 00:17:13.230 22:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.230 22:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.230 22:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3958594' 00:17:13.231 killing process with pid 3958594 00:17:13.231 22:08:55 -- common/autotest_common.sh@955 -- # kill 3958594 00:17:13.231 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.231 00:17:13.231 Latency(us) 00:17:13.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.231 =================================================================================================================== 00:17:13.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.231 [2024-04-24 22:08:55.425193] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:13.231 22:08:55 -- common/autotest_common.sh@960 -- # wait 3958594 00:17:13.489 22:08:55 -- target/tls.sh@37 -- # return 1 00:17:13.489 22:08:55 -- common/autotest_common.sh@641 -- # es=1 00:17:13.489 22:08:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:13.489 22:08:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:13.489 22:08:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:13.489 22:08:55 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.489 22:08:55 -- common/autotest_common.sh@638 -- # local es=0 00:17:13.489 22:08:55 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.489 22:08:55 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:13.489 22:08:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:13.489 22:08:55 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:13.489 22:08:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:13.489 22:08:55 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.489 22:08:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.489 22:08:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.489 22:08:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.489 22:08:55 -- target/tls.sh@23 -- # psk= 00:17:13.489 22:08:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.489 22:08:55 -- target/tls.sh@28 -- # bdevperf_pid=3958731 00:17:13.489 22:08:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.489 22:08:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:17:13.489 22:08:55 -- target/tls.sh@31 -- # waitforlisten 3958731 /var/tmp/bdevperf.sock 00:17:13.489 22:08:55 -- common/autotest_common.sh@817 -- # '[' -z 3958731 ']' 00:17:13.489 22:08:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.489 22:08:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:13.489 22:08:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.489 22:08:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:13.489 22:08:55 -- common/autotest_common.sh@10 -- # set +x 00:17:13.748 [2024-04-24 22:08:55.760475] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:13.748 [2024-04-24 22:08:55.760574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958731 ] 00:17:13.748 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.748 [2024-04-24 22:08:55.836475] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.748 [2024-04-24 22:08:55.957195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.004 22:08:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:14.004 22:08:56 -- common/autotest_common.sh@850 -- # return 0 00:17:14.004 22:08:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:14.262 [2024-04-24 22:08:56.400573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: 
*ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.262 [2024-04-24 22:08:56.402260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10db1c0 (9): Bad file descriptor 00:17:14.262 [2024-04-24 22:08:56.403256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:14.262 [2024-04-24 22:08:56.403291] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.262 [2024-04-24 22:08:56.403308] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:14.262 request: 00:17:14.262 { 00:17:14.262 "name": "TLSTEST", 00:17:14.262 "trtype": "tcp", 00:17:14.262 "traddr": "10.0.0.2", 00:17:14.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.262 "adrfam": "ipv4", 00:17:14.262 "trsvcid": "4420", 00:17:14.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.262 "method": "bdev_nvme_attach_controller", 00:17:14.262 "req_id": 1 00:17:14.262 } 00:17:14.262 Got JSON-RPC error response 00:17:14.262 response: 00:17:14.262 { 00:17:14.262 "code": -32602, 00:17:14.262 "message": "Invalid parameters" 00:17:14.262 } 00:17:14.262 22:08:56 -- target/tls.sh@36 -- # killprocess 3958731 00:17:14.262 22:08:56 -- common/autotest_common.sh@936 -- # '[' -z 3958731 ']' 00:17:14.262 22:08:56 -- common/autotest_common.sh@940 -- # kill -0 3958731 00:17:14.262 22:08:56 -- common/autotest_common.sh@941 -- # uname 00:17:14.262 22:08:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.262 22:08:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3958731 00:17:14.262 22:08:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.262 22:08:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.262 22:08:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3958731' 00:17:14.262 killing process with pid 3958731 00:17:14.262 
22:08:56 -- common/autotest_common.sh@955 -- # kill 3958731 00:17:14.262 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.262 00:17:14.262 Latency(us) 00:17:14.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.262 =================================================================================================================== 00:17:14.262 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.262 22:08:56 -- common/autotest_common.sh@960 -- # wait 3958731 00:17:14.520 22:08:56 -- target/tls.sh@37 -- # return 1 00:17:14.520 22:08:56 -- common/autotest_common.sh@641 -- # es=1 00:17:14.520 22:08:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:14.520 22:08:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:14.520 22:08:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:14.520 22:08:56 -- target/tls.sh@158 -- # killprocess 3954968 00:17:14.520 22:08:56 -- common/autotest_common.sh@936 -- # '[' -z 3954968 ']' 00:17:14.520 22:08:56 -- common/autotest_common.sh@940 -- # kill -0 3954968 00:17:14.520 22:08:56 -- common/autotest_common.sh@941 -- # uname 00:17:14.520 22:08:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.520 22:08:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3954968 00:17:14.520 22:08:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:14.520 22:08:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:14.520 22:08:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3954968' 00:17:14.520 killing process with pid 3954968 00:17:14.520 22:08:56 -- common/autotest_common.sh@955 -- # kill 3954968 00:17:14.520 [2024-04-24 22:08:56.770506] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:14.520 [2024-04-24 
22:08:56.770564] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:14.520 22:08:56 -- common/autotest_common.sh@960 -- # wait 3954968 00:17:15.086 22:08:57 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:15.086 22:08:57 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:15.086 22:08:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:15.086 22:08:57 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:15.086 22:08:57 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:15.086 22:08:57 -- nvmf/common.sh@693 -- # digest=2 00:17:15.086 22:08:57 -- nvmf/common.sh@694 -- # python - 00:17:15.086 22:08:57 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:15.086 22:08:57 -- target/tls.sh@160 -- # mktemp 00:17:15.086 22:08:57 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LsrurwB9IF 00:17:15.086 22:08:57 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:15.086 22:08:57 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LsrurwB9IF 00:17:15.086 22:08:57 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:15.086 22:08:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:15.086 22:08:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:15.086 22:08:57 -- common/autotest_common.sh@10 -- # set +x 00:17:15.086 22:08:57 -- nvmf/common.sh@470 -- # nvmfpid=3958930 00:17:15.086 22:08:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.086 22:08:57 -- nvmf/common.sh@471 -- # waitforlisten 3958930 00:17:15.086 22:08:57 -- common/autotest_common.sh@817 -- # '[' -z 
3958930 ']' 00:17:15.086 22:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.086 22:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.086 22:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.086 22:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.086 22:08:57 -- common/autotest_common.sh@10 -- # set +x 00:17:15.086 [2024-04-24 22:08:57.185638] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:15.086 [2024-04-24 22:08:57.185748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.086 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.086 [2024-04-24 22:08:57.269652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.344 [2024-04-24 22:08:57.409777] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.344 [2024-04-24 22:08:57.409852] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.344 [2024-04-24 22:08:57.409868] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.344 [2024-04-24 22:08:57.409881] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.344 [2024-04-24 22:08:57.409910] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
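The `format_interchange_psk` step logged above runs a small python snippet (`nvmf/common.sh`'s `format_key`) that turns the configured key and digest id into the TLS PSK interchange format. A sketch of what that transform appears to do, assuming the key bytes are the ASCII key string with a little-endian CRC-32 appended before base64 encoding:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the format_key transform seen in the log: append a
    little-endian CRC-32 of the configured key, base64-encode, and
    wrap as `<prefix>:<digest>:<b64>:` (assumed, not SPDK source)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"{prefix}:{digest:02x}:{b64}:"

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

Under these assumptions the result matches the `key_long` value captured in the log, which the test then writes to `/tmp/tmp.LsrurwB9IF` with mode 0600 before registering it via `nvmf_subsystem_add_host --psk`.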
00:17:15.344 [2024-04-24 22:08:57.409957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.344 22:08:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:15.344 22:08:57 -- common/autotest_common.sh@850 -- # return 0 00:17:15.345 22:08:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:15.345 22:08:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:15.345 22:08:57 -- common/autotest_common.sh@10 -- # set +x 00:17:15.345 22:08:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.345 22:08:57 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:15.345 22:08:57 -- target/tls.sh@49 -- # local key=/tmp/tmp.LsrurwB9IF 00:17:15.345 22:08:57 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.911 [2024-04-24 22:08:57.864147] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.911 22:08:57 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.169 22:08:58 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:16.427 [2024-04-24 22:08:58.549924] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:16.427 [2024-04-24 22:08:58.550030] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:16.427 [2024-04-24 22:08:58.550272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.427 22:08:58 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 32 4096 -b malloc0 00:17:16.993 malloc0 00:17:16.993 22:08:59 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.559 22:08:59 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:17.559 [2024-04-24 22:08:59.778060] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:17.559 22:08:59 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsrurwB9IF 00:17:17.559 22:08:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.559 22:08:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.559 22:08:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.559 22:08:59 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LsrurwB9IF' 00:17:17.559 22:08:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.559 22:08:59 -- target/tls.sh@28 -- # bdevperf_pid=3959292 00:17:17.559 22:08:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.559 22:08:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.559 22:08:59 -- target/tls.sh@31 -- # waitforlisten 3959292 /var/tmp/bdevperf.sock 00:17:17.559 22:08:59 -- common/autotest_common.sh@817 -- # '[' -z 3959292 ']' 00:17:17.559 22:08:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.559 22:08:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.559 22:08:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:17:17.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.559 22:08:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.559 22:08:59 -- common/autotest_common.sh@10 -- # set +x 00:17:17.818 [2024-04-24 22:08:59.867776] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:17.818 [2024-04-24 22:08:59.867934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959292 ] 00:17:17.818 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.818 [2024-04-24 22:08:59.972888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.076 [2024-04-24 22:09:00.099429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.642 22:09:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.642 22:09:00 -- common/autotest_common.sh@850 -- # return 0 00:17:18.642 22:09:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:18.899 [2024-04-24 22:09:01.129629] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.899 [2024-04-24 22:09:01.129791] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.157 TLSTESTn1 00:17:19.157 22:09:01 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:19.157 Running I/O for 10 seconds... 
00:17:31.358 00:17:31.358 Latency(us) 00:17:31.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.358 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.358 Verification LBA range: start 0x0 length 0x2000 00:17:31.358 TLSTESTn1 : 10.03 3067.71 11.98 0.00 0.00 41637.43 6602.15 55535.69 00:17:31.358 =================================================================================================================== 00:17:31.358 Total : 3067.71 11.98 0.00 0.00 41637.43 6602.15 55535.69 00:17:31.358 0 00:17:31.358 22:09:11 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.358 22:09:11 -- target/tls.sh@45 -- # killprocess 3959292 00:17:31.358 22:09:11 -- common/autotest_common.sh@936 -- # '[' -z 3959292 ']' 00:17:31.358 22:09:11 -- common/autotest_common.sh@940 -- # kill -0 3959292 00:17:31.358 22:09:11 -- common/autotest_common.sh@941 -- # uname 00:17:31.358 22:09:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.358 22:09:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3959292 00:17:31.358 22:09:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:31.358 22:09:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:31.358 22:09:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3959292' 00:17:31.358 killing process with pid 3959292 00:17:31.358 22:09:11 -- common/autotest_common.sh@955 -- # kill 3959292 00:17:31.359 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.359 00:17:31.359 Latency(us) 00:17:31.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.359 =================================================================================================================== 00:17:31.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.359 [2024-04-24 22:09:11.482608] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:31.359 22:09:11 -- common/autotest_common.sh@960 -- # wait 3959292 00:17:31.359 22:09:11 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LsrurwB9IF 00:17:31.359 22:09:11 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsrurwB9IF 00:17:31.359 22:09:11 -- common/autotest_common.sh@638 -- # local es=0 00:17:31.359 22:09:11 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsrurwB9IF 00:17:31.359 22:09:11 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:31.359 22:09:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:31.359 22:09:11 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:31.359 22:09:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:31.359 22:09:11 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LsrurwB9IF 00:17:31.359 22:09:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.359 22:09:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.359 22:09:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.359 22:09:11 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LsrurwB9IF' 00:17:31.359 22:09:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.359 22:09:11 -- target/tls.sh@28 -- # bdevperf_pid=3961240 00:17:31.359 22:09:11 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.359 22:09:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.359 22:09:11 -- target/tls.sh@31 -- # waitforlisten 3961240 /var/tmp/bdevperf.sock 00:17:31.359 22:09:11 -- common/autotest_common.sh@817 -- # '[' -z 
3961240 ']' 00:17:31.359 22:09:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.359 22:09:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.359 22:09:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.359 22:09:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.359 22:09:11 -- common/autotest_common.sh@10 -- # set +x 00:17:31.359 [2024-04-24 22:09:11.859845] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:31.359 [2024-04-24 22:09:11.860010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961240 ] 00:17:31.359 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.359 [2024-04-24 22:09:11.965986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.359 [2024-04-24 22:09:12.083791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.359 22:09:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.359 22:09:12 -- common/autotest_common.sh@850 -- # return 0 00:17:31.359 22:09:12 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:31.359 [2024-04-24 22:09:12.540472] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.359 [2024-04-24 22:09:12.540560] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:31.359 [2024-04-24 
22:09:12.540578] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LsrurwB9IF 00:17:31.359 request: 00:17:31.359 { 00:17:31.359 "name": "TLSTEST", 00:17:31.359 "trtype": "tcp", 00:17:31.359 "traddr": "10.0.0.2", 00:17:31.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.359 "adrfam": "ipv4", 00:17:31.359 "trsvcid": "4420", 00:17:31.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.359 "psk": "/tmp/tmp.LsrurwB9IF", 00:17:31.359 "method": "bdev_nvme_attach_controller", 00:17:31.359 "req_id": 1 00:17:31.359 } 00:17:31.359 Got JSON-RPC error response 00:17:31.359 response: 00:17:31.359 { 00:17:31.359 "code": -1, 00:17:31.359 "message": "Operation not permitted" 00:17:31.359 } 00:17:31.359 22:09:12 -- target/tls.sh@36 -- # killprocess 3961240 00:17:31.359 22:09:12 -- common/autotest_common.sh@936 -- # '[' -z 3961240 ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@940 -- # kill -0 3961240 00:17:31.359 22:09:12 -- common/autotest_common.sh@941 -- # uname 00:17:31.359 22:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3961240 00:17:31.359 22:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:31.359 22:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3961240' 00:17:31.359 killing process with pid 3961240 00:17:31.359 22:09:12 -- common/autotest_common.sh@955 -- # kill 3961240 00:17:31.359 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.359 00:17:31.359 Latency(us) 00:17:31.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.359 =================================================================================================================== 00:17:31.359 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.359 22:09:12 -- 
common/autotest_common.sh@960 -- # wait 3961240 00:17:31.359 22:09:12 -- target/tls.sh@37 -- # return 1 00:17:31.359 22:09:12 -- common/autotest_common.sh@641 -- # es=1 00:17:31.359 22:09:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:31.359 22:09:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:31.359 22:09:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:31.359 22:09:12 -- target/tls.sh@174 -- # killprocess 3958930 00:17:31.359 22:09:12 -- common/autotest_common.sh@936 -- # '[' -z 3958930 ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@940 -- # kill -0 3958930 00:17:31.359 22:09:12 -- common/autotest_common.sh@941 -- # uname 00:17:31.359 22:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3958930 00:17:31.359 22:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:31.359 22:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:31.359 22:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3958930' 00:17:31.359 killing process with pid 3958930 00:17:31.359 22:09:12 -- common/autotest_common.sh@955 -- # kill 3958930 00:17:31.359 [2024-04-24 22:09:12.874902] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:31.359 [2024-04-24 22:09:12.874975] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:31.359 22:09:12 -- common/autotest_common.sh@960 -- # wait 3958930 00:17:31.359 22:09:13 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:31.359 22:09:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:31.359 22:09:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:31.359 22:09:13 -- common/autotest_common.sh@10 -- # 
set +x 00:17:31.359 22:09:13 -- nvmf/common.sh@470 -- # nvmfpid=3961425 00:17:31.359 22:09:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:31.359 22:09:13 -- nvmf/common.sh@471 -- # waitforlisten 3961425 00:17:31.359 22:09:13 -- common/autotest_common.sh@817 -- # '[' -z 3961425 ']' 00:17:31.359 22:09:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.359 22:09:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.359 22:09:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.359 22:09:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.359 22:09:13 -- common/autotest_common.sh@10 -- # set +x 00:17:31.359 [2024-04-24 22:09:13.222544] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:31.359 [2024-04-24 22:09:13.222634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.359 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.359 [2024-04-24 22:09:13.305601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.359 [2024-04-24 22:09:13.445390] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.359 [2024-04-24 22:09:13.445492] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:31.359 [2024-04-24 22:09:13.445509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.359 [2024-04-24 22:09:13.445524] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.359 [2024-04-24 22:09:13.445536] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.359 [2024-04-24 22:09:13.445578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.359 22:09:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.359 22:09:13 -- common/autotest_common.sh@850 -- # return 0 00:17:31.359 22:09:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:31.359 22:09:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:31.359 22:09:13 -- common/autotest_common.sh@10 -- # set +x 00:17:31.618 22:09:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.618 22:09:13 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:31.618 22:09:13 -- common/autotest_common.sh@638 -- # local es=0 00:17:31.618 22:09:13 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:31.618 22:09:13 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:31.618 22:09:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:31.618 22:09:13 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:31.618 22:09:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:31.618 22:09:13 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:31.618 22:09:13 -- target/tls.sh@49 -- # local key=/tmp/tmp.LsrurwB9IF 00:17:31.618 22:09:13 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:32.183 [2024-04-24 22:09:14.175714] tcp.c: 669:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:17:32.184 22:09:14 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.441 22:09:14 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.699 [2024-04-24 22:09:14.833406] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:32.699 [2024-04-24 22:09:14.833511] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.699 [2024-04-24 22:09:14.833766] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.699 22:09:14 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.957 malloc0 00:17:32.957 22:09:15 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.522 22:09:15 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:33.780 [2024-04-24 22:09:15.820527] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:33.780 [2024-04-24 22:09:15.820572] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:33.780 [2024-04-24 22:09:15.820606] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:33.780 request: 00:17:33.780 { 00:17:33.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.780 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.780 "psk": 
"/tmp/tmp.LsrurwB9IF", 00:17:33.780 "method": "nvmf_subsystem_add_host", 00:17:33.780 "req_id": 1 00:17:33.780 } 00:17:33.780 Got JSON-RPC error response 00:17:33.780 response: 00:17:33.780 { 00:17:33.780 "code": -32603, 00:17:33.780 "message": "Internal error" 00:17:33.780 } 00:17:33.780 22:09:15 -- common/autotest_common.sh@641 -- # es=1 00:17:33.780 22:09:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:33.780 22:09:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:33.780 22:09:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:33.780 22:09:15 -- target/tls.sh@180 -- # killprocess 3961425 00:17:33.780 22:09:15 -- common/autotest_common.sh@936 -- # '[' -z 3961425 ']' 00:17:33.780 22:09:15 -- common/autotest_common.sh@940 -- # kill -0 3961425 00:17:33.780 22:09:15 -- common/autotest_common.sh@941 -- # uname 00:17:33.780 22:09:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.780 22:09:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3961425 00:17:33.780 22:09:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:33.780 22:09:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:33.780 22:09:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3961425' 00:17:33.780 killing process with pid 3961425 00:17:33.780 22:09:15 -- common/autotest_common.sh@955 -- # kill 3961425 00:17:33.780 [2024-04-24 22:09:15.870391] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:33.780 22:09:15 -- common/autotest_common.sh@960 -- # wait 3961425 00:17:34.038 22:09:16 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LsrurwB9IF 00:17:34.038 22:09:16 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:34.038 22:09:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:34.038 22:09:16 -- common/autotest_common.sh@710 
-- # xtrace_disable 00:17:34.038 22:09:16 -- common/autotest_common.sh@10 -- # set +x 00:17:34.038 22:09:16 -- nvmf/common.sh@470 -- # nvmfpid=3961814 00:17:34.038 22:09:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.038 22:09:16 -- nvmf/common.sh@471 -- # waitforlisten 3961814 00:17:34.038 22:09:16 -- common/autotest_common.sh@817 -- # '[' -z 3961814 ']' 00:17:34.038 22:09:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.038 22:09:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:34.038 22:09:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.038 22:09:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:34.038 22:09:16 -- common/autotest_common.sh@10 -- # set +x 00:17:34.038 [2024-04-24 22:09:16.206285] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:34.038 [2024-04-24 22:09:16.206378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.038 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.038 [2024-04-24 22:09:16.281138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.296 [2024-04-24 22:09:16.402056] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.296 [2024-04-24 22:09:16.402133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:34.296 [2024-04-24 22:09:16.402150] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.296 [2024-04-24 22:09:16.402164] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.296 [2024-04-24 22:09:16.402176] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.296 [2024-04-24 22:09:16.402221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.296 22:09:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.296 22:09:16 -- common/autotest_common.sh@850 -- # return 0 00:17:34.296 22:09:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:34.296 22:09:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:34.296 22:09:16 -- common/autotest_common.sh@10 -- # set +x 00:17:34.296 22:09:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.296 22:09:16 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:34.553 22:09:16 -- target/tls.sh@49 -- # local key=/tmp/tmp.LsrurwB9IF 00:17:34.553 22:09:16 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.811 [2024-04-24 22:09:16.879063] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.811 22:09:16 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.068 22:09:17 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.327 [2024-04-24 22:09:17.540812] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:17:35.327 [2024-04-24 22:09:17.540942] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.327 [2024-04-24 22:09:17.541194] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.327 22:09:17 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.922 malloc0 00:17:35.922 22:09:17 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:36.487 22:09:18 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:36.746 [2024-04-24 22:09:18.752833] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:36.746 22:09:18 -- target/tls.sh@188 -- # bdevperf_pid=3962102 00:17:36.746 22:09:18 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.746 22:09:18 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.746 22:09:18 -- target/tls.sh@191 -- # waitforlisten 3962102 /var/tmp/bdevperf.sock 00:17:36.746 22:09:18 -- common/autotest_common.sh@817 -- # '[' -z 3962102 ']' 00:17:36.746 22:09:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.746 22:09:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.746 22:09:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:36.746 22:09:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.746 22:09:18 -- common/autotest_common.sh@10 -- # set +x 00:17:36.746 [2024-04-24 22:09:18.824467] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:36.746 [2024-04-24 22:09:18.824566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962102 ] 00:17:36.746 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.746 [2024-04-24 22:09:18.900345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.004 [2024-04-24 22:09:19.023059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.004 22:09:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.004 22:09:19 -- common/autotest_common.sh@850 -- # return 0 00:17:37.004 22:09:19 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:37.569 [2024-04-24 22:09:19.595024] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.569 [2024-04-24 22:09:19.595195] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.569 TLSTESTn1 00:17:37.569 22:09:19 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:38.135 22:09:20 -- target/tls.sh@196 -- # tgtconf='{ 00:17:38.135 "subsystems": [ 00:17:38.135 { 00:17:38.135 "subsystem": "keyring", 00:17:38.135 "config": [] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "iobuf", 00:17:38.135 "config": [ 00:17:38.135 
{ 00:17:38.135 "method": "iobuf_set_options", 00:17:38.135 "params": { 00:17:38.135 "small_pool_count": 8192, 00:17:38.135 "large_pool_count": 1024, 00:17:38.135 "small_bufsize": 8192, 00:17:38.135 "large_bufsize": 135168 00:17:38.135 } 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "sock", 00:17:38.135 "config": [ 00:17:38.135 { 00:17:38.135 "method": "sock_impl_set_options", 00:17:38.135 "params": { 00:17:38.135 "impl_name": "posix", 00:17:38.135 "recv_buf_size": 2097152, 00:17:38.135 "send_buf_size": 2097152, 00:17:38.135 "enable_recv_pipe": true, 00:17:38.135 "enable_quickack": false, 00:17:38.135 "enable_placement_id": 0, 00:17:38.135 "enable_zerocopy_send_server": true, 00:17:38.135 "enable_zerocopy_send_client": false, 00:17:38.135 "zerocopy_threshold": 0, 00:17:38.135 "tls_version": 0, 00:17:38.135 "enable_ktls": false 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "sock_impl_set_options", 00:17:38.135 "params": { 00:17:38.135 "impl_name": "ssl", 00:17:38.135 "recv_buf_size": 4096, 00:17:38.135 "send_buf_size": 4096, 00:17:38.135 "enable_recv_pipe": true, 00:17:38.135 "enable_quickack": false, 00:17:38.135 "enable_placement_id": 0, 00:17:38.135 "enable_zerocopy_send_server": true, 00:17:38.135 "enable_zerocopy_send_client": false, 00:17:38.135 "zerocopy_threshold": 0, 00:17:38.135 "tls_version": 0, 00:17:38.135 "enable_ktls": false 00:17:38.135 } 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "vmd", 00:17:38.135 "config": [] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "accel", 00:17:38.135 "config": [ 00:17:38.135 { 00:17:38.135 "method": "accel_set_options", 00:17:38.135 "params": { 00:17:38.135 "small_cache_size": 128, 00:17:38.135 "large_cache_size": 16, 00:17:38.135 "task_count": 2048, 00:17:38.135 "sequence_count": 2048, 00:17:38.135 "buf_count": 2048 00:17:38.135 } 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 
"subsystem": "bdev", 00:17:38.135 "config": [ 00:17:38.135 { 00:17:38.135 "method": "bdev_set_options", 00:17:38.135 "params": { 00:17:38.135 "bdev_io_pool_size": 65535, 00:17:38.135 "bdev_io_cache_size": 256, 00:17:38.135 "bdev_auto_examine": true, 00:17:38.135 "iobuf_small_cache_size": 128, 00:17:38.135 "iobuf_large_cache_size": 16 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_raid_set_options", 00:17:38.135 "params": { 00:17:38.135 "process_window_size_kb": 1024 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_iscsi_set_options", 00:17:38.135 "params": { 00:17:38.135 "timeout_sec": 30 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_nvme_set_options", 00:17:38.135 "params": { 00:17:38.135 "action_on_timeout": "none", 00:17:38.135 "timeout_us": 0, 00:17:38.135 "timeout_admin_us": 0, 00:17:38.135 "keep_alive_timeout_ms": 10000, 00:17:38.135 "arbitration_burst": 0, 00:17:38.135 "low_priority_weight": 0, 00:17:38.135 "medium_priority_weight": 0, 00:17:38.135 "high_priority_weight": 0, 00:17:38.135 "nvme_adminq_poll_period_us": 10000, 00:17:38.135 "nvme_ioq_poll_period_us": 0, 00:17:38.135 "io_queue_requests": 0, 00:17:38.135 "delay_cmd_submit": true, 00:17:38.135 "transport_retry_count": 4, 00:17:38.135 "bdev_retry_count": 3, 00:17:38.135 "transport_ack_timeout": 0, 00:17:38.135 "ctrlr_loss_timeout_sec": 0, 00:17:38.135 "reconnect_delay_sec": 0, 00:17:38.135 "fast_io_fail_timeout_sec": 0, 00:17:38.135 "disable_auto_failback": false, 00:17:38.135 "generate_uuids": false, 00:17:38.135 "transport_tos": 0, 00:17:38.135 "nvme_error_stat": false, 00:17:38.135 "rdma_srq_size": 0, 00:17:38.135 "io_path_stat": false, 00:17:38.135 "allow_accel_sequence": false, 00:17:38.135 "rdma_max_cq_size": 0, 00:17:38.135 "rdma_cm_event_timeout_ms": 0, 00:17:38.135 "dhchap_digests": [ 00:17:38.135 "sha256", 00:17:38.135 "sha384", 00:17:38.135 "sha512" 00:17:38.135 ], 00:17:38.135 "dhchap_dhgroups": [ 
00:17:38.135 "null", 00:17:38.135 "ffdhe2048", 00:17:38.135 "ffdhe3072", 00:17:38.135 "ffdhe4096", 00:17:38.135 "ffdhe6144", 00:17:38.135 "ffdhe8192" 00:17:38.135 ] 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_nvme_set_hotplug", 00:17:38.135 "params": { 00:17:38.135 "period_us": 100000, 00:17:38.135 "enable": false 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_malloc_create", 00:17:38.135 "params": { 00:17:38.135 "name": "malloc0", 00:17:38.135 "num_blocks": 8192, 00:17:38.135 "block_size": 4096, 00:17:38.135 "physical_block_size": 4096, 00:17:38.135 "uuid": "1b7edb2f-876c-4141-a902-505b629e390d", 00:17:38.135 "optimal_io_boundary": 0 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "bdev_wait_for_examine" 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "nbd", 00:17:38.135 "config": [] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "scheduler", 00:17:38.135 "config": [ 00:17:38.135 { 00:17:38.135 "method": "framework_set_scheduler", 00:17:38.135 "params": { 00:17:38.135 "name": "static" 00:17:38.135 } 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "subsystem": "nvmf", 00:17:38.135 "config": [ 00:17:38.135 { 00:17:38.135 "method": "nvmf_set_config", 00:17:38.135 "params": { 00:17:38.135 "discovery_filter": "match_any", 00:17:38.135 "admin_cmd_passthru": { 00:17:38.135 "identify_ctrlr": false 00:17:38.135 } 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_set_max_subsystems", 00:17:38.135 "params": { 00:17:38.135 "max_subsystems": 1024 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_set_crdt", 00:17:38.135 "params": { 00:17:38.135 "crdt1": 0, 00:17:38.135 "crdt2": 0, 00:17:38.135 "crdt3": 0 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_create_transport", 00:17:38.135 "params": { 00:17:38.135 "trtype": "TCP", 00:17:38.135 
"max_queue_depth": 128, 00:17:38.135 "max_io_qpairs_per_ctrlr": 127, 00:17:38.135 "in_capsule_data_size": 4096, 00:17:38.135 "max_io_size": 131072, 00:17:38.135 "io_unit_size": 131072, 00:17:38.135 "max_aq_depth": 128, 00:17:38.135 "num_shared_buffers": 511, 00:17:38.135 "buf_cache_size": 4294967295, 00:17:38.135 "dif_insert_or_strip": false, 00:17:38.135 "zcopy": false, 00:17:38.135 "c2h_success": false, 00:17:38.135 "sock_priority": 0, 00:17:38.135 "abort_timeout_sec": 1, 00:17:38.135 "ack_timeout": 0 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_create_subsystem", 00:17:38.135 "params": { 00:17:38.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.135 "allow_any_host": false, 00:17:38.135 "serial_number": "SPDK00000000000001", 00:17:38.135 "model_number": "SPDK bdev Controller", 00:17:38.135 "max_namespaces": 10, 00:17:38.135 "min_cntlid": 1, 00:17:38.135 "max_cntlid": 65519, 00:17:38.135 "ana_reporting": false 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_subsystem_add_host", 00:17:38.135 "params": { 00:17:38.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.135 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.135 "psk": "/tmp/tmp.LsrurwB9IF" 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_subsystem_add_ns", 00:17:38.135 "params": { 00:17:38.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.135 "namespace": { 00:17:38.135 "nsid": 1, 00:17:38.135 "bdev_name": "malloc0", 00:17:38.135 "nguid": "1B7EDB2F876C4141A902505B629E390D", 00:17:38.135 "uuid": "1b7edb2f-876c-4141-a902-505b629e390d", 00:17:38.135 "no_auto_visible": false 00:17:38.135 } 00:17:38.135 } 00:17:38.135 }, 00:17:38.135 { 00:17:38.135 "method": "nvmf_subsystem_add_listener", 00:17:38.135 "params": { 00:17:38.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.135 "listen_address": { 00:17:38.135 "trtype": "TCP", 00:17:38.135 "adrfam": "IPv4", 00:17:38.135 "traddr": "10.0.0.2", 00:17:38.135 "trsvcid": "4420" 00:17:38.135 }, 
00:17:38.135 "secure_channel": true 00:17:38.135 } 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 } 00:17:38.135 ] 00:17:38.135 }' 00:17:38.135 22:09:20 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:38.699 22:09:20 -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:38.699 "subsystems": [ 00:17:38.699 { 00:17:38.699 "subsystem": "keyring", 00:17:38.699 "config": [] 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "subsystem": "iobuf", 00:17:38.699 "config": [ 00:17:38.699 { 00:17:38.699 "method": "iobuf_set_options", 00:17:38.699 "params": { 00:17:38.699 "small_pool_count": 8192, 00:17:38.699 "large_pool_count": 1024, 00:17:38.699 "small_bufsize": 8192, 00:17:38.699 "large_bufsize": 135168 00:17:38.699 } 00:17:38.699 } 00:17:38.699 ] 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "subsystem": "sock", 00:17:38.699 "config": [ 00:17:38.699 { 00:17:38.699 "method": "sock_impl_set_options", 00:17:38.699 "params": { 00:17:38.699 "impl_name": "posix", 00:17:38.699 "recv_buf_size": 2097152, 00:17:38.699 "send_buf_size": 2097152, 00:17:38.699 "enable_recv_pipe": true, 00:17:38.699 "enable_quickack": false, 00:17:38.699 "enable_placement_id": 0, 00:17:38.699 "enable_zerocopy_send_server": true, 00:17:38.699 "enable_zerocopy_send_client": false, 00:17:38.699 "zerocopy_threshold": 0, 00:17:38.699 "tls_version": 0, 00:17:38.699 "enable_ktls": false 00:17:38.699 } 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "method": "sock_impl_set_options", 00:17:38.699 "params": { 00:17:38.699 "impl_name": "ssl", 00:17:38.699 "recv_buf_size": 4096, 00:17:38.699 "send_buf_size": 4096, 00:17:38.699 "enable_recv_pipe": true, 00:17:38.699 "enable_quickack": false, 00:17:38.699 "enable_placement_id": 0, 00:17:38.699 "enable_zerocopy_send_server": true, 00:17:38.699 "enable_zerocopy_send_client": false, 00:17:38.699 "zerocopy_threshold": 0, 00:17:38.699 "tls_version": 0, 00:17:38.699 "enable_ktls": false 00:17:38.699 } 
00:17:38.699 } 00:17:38.699 ] 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "subsystem": "vmd", 00:17:38.699 "config": [] 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "subsystem": "accel", 00:17:38.699 "config": [ 00:17:38.699 { 00:17:38.699 "method": "accel_set_options", 00:17:38.699 "params": { 00:17:38.699 "small_cache_size": 128, 00:17:38.699 "large_cache_size": 16, 00:17:38.699 "task_count": 2048, 00:17:38.699 "sequence_count": 2048, 00:17:38.699 "buf_count": 2048 00:17:38.699 } 00:17:38.699 } 00:17:38.699 ] 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "subsystem": "bdev", 00:17:38.699 "config": [ 00:17:38.699 { 00:17:38.699 "method": "bdev_set_options", 00:17:38.699 "params": { 00:17:38.699 "bdev_io_pool_size": 65535, 00:17:38.699 "bdev_io_cache_size": 256, 00:17:38.699 "bdev_auto_examine": true, 00:17:38.699 "iobuf_small_cache_size": 128, 00:17:38.699 "iobuf_large_cache_size": 16 00:17:38.699 } 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "method": "bdev_raid_set_options", 00:17:38.699 "params": { 00:17:38.699 "process_window_size_kb": 1024 00:17:38.699 } 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "method": "bdev_iscsi_set_options", 00:17:38.699 "params": { 00:17:38.699 "timeout_sec": 30 00:17:38.699 } 00:17:38.699 }, 00:17:38.699 { 00:17:38.699 "method": "bdev_nvme_set_options", 00:17:38.699 "params": { 00:17:38.699 "action_on_timeout": "none", 00:17:38.699 "timeout_us": 0, 00:17:38.699 "timeout_admin_us": 0, 00:17:38.699 "keep_alive_timeout_ms": 10000, 00:17:38.699 "arbitration_burst": 0, 00:17:38.699 "low_priority_weight": 0, 00:17:38.699 "medium_priority_weight": 0, 00:17:38.699 "high_priority_weight": 0, 00:17:38.699 "nvme_adminq_poll_period_us": 10000, 00:17:38.699 "nvme_ioq_poll_period_us": 0, 00:17:38.699 "io_queue_requests": 512, 00:17:38.699 "delay_cmd_submit": true, 00:17:38.699 "transport_retry_count": 4, 00:17:38.699 "bdev_retry_count": 3, 00:17:38.699 "transport_ack_timeout": 0, 00:17:38.699 "ctrlr_loss_timeout_sec": 0, 00:17:38.699 
"reconnect_delay_sec": 0, 00:17:38.699 "fast_io_fail_timeout_sec": 0, 00:17:38.699 "disable_auto_failback": false, 00:17:38.699 "generate_uuids": false, 00:17:38.699 "transport_tos": 0, 00:17:38.699 "nvme_error_stat": false, 00:17:38.699 "rdma_srq_size": 0, 00:17:38.699 "io_path_stat": false, 00:17:38.699 "allow_accel_sequence": false, 00:17:38.699 "rdma_max_cq_size": 0, 00:17:38.700 "rdma_cm_event_timeout_ms": 0, 00:17:38.700 "dhchap_digests": [ 00:17:38.700 "sha256", 00:17:38.700 "sha384", 00:17:38.700 "sha512" 00:17:38.700 ], 00:17:38.700 "dhchap_dhgroups": [ 00:17:38.700 "null", 00:17:38.700 "ffdhe2048", 00:17:38.700 "ffdhe3072", 00:17:38.700 "ffdhe4096", 00:17:38.700 "ffdhe6144", 00:17:38.700 "ffdhe8192" 00:17:38.700 ] 00:17:38.700 } 00:17:38.700 }, 00:17:38.700 { 00:17:38.700 "method": "bdev_nvme_attach_controller", 00:17:38.700 "params": { 00:17:38.700 "name": "TLSTEST", 00:17:38.700 "trtype": "TCP", 00:17:38.700 "adrfam": "IPv4", 00:17:38.700 "traddr": "10.0.0.2", 00:17:38.700 "trsvcid": "4420", 00:17:38.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.700 "prchk_reftag": false, 00:17:38.700 "prchk_guard": false, 00:17:38.700 "ctrlr_loss_timeout_sec": 0, 00:17:38.700 "reconnect_delay_sec": 0, 00:17:38.700 "fast_io_fail_timeout_sec": 0, 00:17:38.700 "psk": "/tmp/tmp.LsrurwB9IF", 00:17:38.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.700 "hdgst": false, 00:17:38.700 "ddgst": false 00:17:38.700 } 00:17:38.700 }, 00:17:38.700 { 00:17:38.700 "method": "bdev_nvme_set_hotplug", 00:17:38.700 "params": { 00:17:38.700 "period_us": 100000, 00:17:38.700 "enable": false 00:17:38.700 } 00:17:38.700 }, 00:17:38.700 { 00:17:38.700 "method": "bdev_wait_for_examine" 00:17:38.700 } 00:17:38.700 ] 00:17:38.700 }, 00:17:38.700 { 00:17:38.700 "subsystem": "nbd", 00:17:38.700 "config": [] 00:17:38.700 } 00:17:38.700 ] 00:17:38.700 }' 00:17:38.700 22:09:20 -- target/tls.sh@199 -- # killprocess 3962102 00:17:38.700 22:09:20 -- common/autotest_common.sh@936 -- # '[' -z 
3962102 ']' 00:17:38.700 22:09:20 -- common/autotest_common.sh@940 -- # kill -0 3962102 00:17:38.700 22:09:20 -- common/autotest_common.sh@941 -- # uname 00:17:38.700 22:09:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.700 22:09:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3962102 00:17:38.700 22:09:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.700 22:09:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.700 22:09:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3962102' 00:17:38.700 killing process with pid 3962102 00:17:38.700 22:09:20 -- common/autotest_common.sh@955 -- # kill 3962102 00:17:38.700 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.700 00:17:38.700 Latency(us) 00:17:38.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.700 =================================================================================================================== 00:17:38.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.700 [2024-04-24 22:09:20.765213] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.700 22:09:20 -- common/autotest_common.sh@960 -- # wait 3962102 00:17:38.958 22:09:21 -- target/tls.sh@200 -- # killprocess 3961814 00:17:38.958 22:09:21 -- common/autotest_common.sh@936 -- # '[' -z 3961814 ']' 00:17:38.958 22:09:21 -- common/autotest_common.sh@940 -- # kill -0 3961814 00:17:38.958 22:09:21 -- common/autotest_common.sh@941 -- # uname 00:17:38.958 22:09:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.958 22:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3961814 00:17:38.958 22:09:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.958 22:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = 
sudo ']' 00:17:38.958 22:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3961814' 00:17:38.958 killing process with pid 3961814 00:17:38.958 22:09:21 -- common/autotest_common.sh@955 -- # kill 3961814 00:17:38.958 [2024-04-24 22:09:21.061206] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:38.958 [2024-04-24 22:09:21.061268] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:38.958 22:09:21 -- common/autotest_common.sh@960 -- # wait 3961814 00:17:39.217 22:09:21 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:39.217 22:09:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:39.217 22:09:21 -- target/tls.sh@203 -- # echo '{ 00:17:39.217 "subsystems": [ 00:17:39.217 { 00:17:39.217 "subsystem": "keyring", 00:17:39.217 "config": [] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "iobuf", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "iobuf_set_options", 00:17:39.217 "params": { 00:17:39.217 "small_pool_count": 8192, 00:17:39.217 "large_pool_count": 1024, 00:17:39.217 "small_bufsize": 8192, 00:17:39.217 "large_bufsize": 135168 00:17:39.217 } 00:17:39.217 } 00:17:39.217 ] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "sock", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "sock_impl_set_options", 00:17:39.217 "params": { 00:17:39.217 "impl_name": "posix", 00:17:39.217 "recv_buf_size": 2097152, 00:17:39.217 "send_buf_size": 2097152, 00:17:39.217 "enable_recv_pipe": true, 00:17:39.217 "enable_quickack": false, 00:17:39.217 "enable_placement_id": 0, 00:17:39.217 "enable_zerocopy_send_server": true, 00:17:39.217 "enable_zerocopy_send_client": false, 00:17:39.217 "zerocopy_threshold": 0, 00:17:39.217 "tls_version": 0, 00:17:39.217 
"enable_ktls": false 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "sock_impl_set_options", 00:17:39.217 "params": { 00:17:39.217 "impl_name": "ssl", 00:17:39.217 "recv_buf_size": 4096, 00:17:39.217 "send_buf_size": 4096, 00:17:39.217 "enable_recv_pipe": true, 00:17:39.217 "enable_quickack": false, 00:17:39.217 "enable_placement_id": 0, 00:17:39.217 "enable_zerocopy_send_server": true, 00:17:39.217 "enable_zerocopy_send_client": false, 00:17:39.217 "zerocopy_threshold": 0, 00:17:39.217 "tls_version": 0, 00:17:39.217 "enable_ktls": false 00:17:39.217 } 00:17:39.217 } 00:17:39.217 ] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "vmd", 00:17:39.217 "config": [] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "accel", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "accel_set_options", 00:17:39.217 "params": { 00:17:39.217 "small_cache_size": 128, 00:17:39.217 "large_cache_size": 16, 00:17:39.217 "task_count": 2048, 00:17:39.217 "sequence_count": 2048, 00:17:39.217 "buf_count": 2048 00:17:39.217 } 00:17:39.217 } 00:17:39.217 ] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "bdev", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "bdev_set_options", 00:17:39.217 "params": { 00:17:39.217 "bdev_io_pool_size": 65535, 00:17:39.217 "bdev_io_cache_size": 256, 00:17:39.217 "bdev_auto_examine": true, 00:17:39.217 "iobuf_small_cache_size": 128, 00:17:39.217 "iobuf_large_cache_size": 16 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_raid_set_options", 00:17:39.217 "params": { 00:17:39.217 "process_window_size_kb": 1024 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_iscsi_set_options", 00:17:39.217 "params": { 00:17:39.217 "timeout_sec": 30 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_nvme_set_options", 00:17:39.217 "params": { 00:17:39.217 "action_on_timeout": "none", 00:17:39.217 "timeout_us": 0, 00:17:39.217 
"timeout_admin_us": 0, 00:17:39.217 "keep_alive_timeout_ms": 10000, 00:17:39.217 "arbitration_burst": 0, 00:17:39.217 "low_priority_weight": 0, 00:17:39.217 "medium_priority_weight": 0, 00:17:39.217 "high_priority_weight": 0, 00:17:39.217 "nvme_adminq_poll_period_us": 10000, 00:17:39.217 "nvme_ioq_poll_period_us": 0, 00:17:39.217 "io_queue_requests": 0, 00:17:39.217 "delay_cmd_submit": true, 00:17:39.217 "transport_retry_count": 4, 00:17:39.217 "bdev_retry_count": 3, 00:17:39.217 "transport_ack_timeout": 0, 00:17:39.217 "ctrlr_loss_timeout_sec": 0, 00:17:39.217 "reconnect_delay_sec": 0, 00:17:39.217 "fast_io_fail_timeout_sec": 0, 00:17:39.217 "disable_auto_failback": false, 00:17:39.217 "generate_uuids": false, 00:17:39.217 "transport_tos": 0, 00:17:39.217 "nvme_error_stat": false, 00:17:39.217 "rdma_srq_size": 0, 00:17:39.217 "io_path_stat": false, 00:17:39.217 "allow_accel_sequence": false, 00:17:39.217 "rdma_max_cq_size": 0, 00:17:39.217 "rdma_cm_event_timeout_ms": 0, 00:17:39.217 "dhchap_digests": [ 00:17:39.217 "sha256", 00:17:39.217 "sha384", 00:17:39.217 "sha512" 00:17:39.217 ], 00:17:39.217 "dhchap_dhgroups": [ 00:17:39.217 "null", 00:17:39.217 "ffdhe2048", 00:17:39.217 "ffdhe3072", 00:17:39.217 "ffdhe4096", 00:17:39.217 "ffdhe6144", 00:17:39.217 "ffdhe8192" 00:17:39.217 ] 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_nvme_set_hotplug", 00:17:39.217 "params": { 00:17:39.217 "period_us": 100000, 00:17:39.217 "enable": false 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_malloc_create", 00:17:39.217 "params": { 00:17:39.217 "name": "malloc0", 00:17:39.217 "num_blocks": 8192, 00:17:39.217 "block_size": 4096, 00:17:39.217 "physical_block_size": 4096, 00:17:39.217 "uuid": "1b7edb2f-876c-4141-a902-505b629e390d", 00:17:39.217 "optimal_io_boundary": 0 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "bdev_wait_for_examine" 00:17:39.217 } 00:17:39.217 ] 00:17:39.217 }, 00:17:39.217 { 
00:17:39.217 "subsystem": "nbd", 00:17:39.217 "config": [] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "scheduler", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "framework_set_scheduler", 00:17:39.217 "params": { 00:17:39.217 "name": "static" 00:17:39.217 } 00:17:39.217 } 00:17:39.217 ] 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "subsystem": "nvmf", 00:17:39.217 "config": [ 00:17:39.217 { 00:17:39.217 "method": "nvmf_set_config", 00:17:39.217 "params": { 00:17:39.217 "discovery_filter": "match_any", 00:17:39.217 "admin_cmd_passthru": { 00:17:39.217 "identify_ctrlr": false 00:17:39.217 } 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "nvmf_set_max_subsystems", 00:17:39.217 "params": { 00:17:39.217 "max_subsystems": 1024 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "nvmf_set_crdt", 00:17:39.217 "params": { 00:17:39.217 "crdt1": 0, 00:17:39.217 "crdt2": 0, 00:17:39.217 "crdt3": 0 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "nvmf_create_transport", 00:17:39.217 "params": { 00:17:39.217 "trtype": "TCP", 00:17:39.217 "max_queue_depth": 128, 00:17:39.217 "max_io_qpairs_per_ctrlr": 127, 00:17:39.217 "in_capsule_data_size": 4096, 00:17:39.217 "max_io_size": 131072, 00:17:39.217 "io_unit_size": 131072, 00:17:39.217 "max_aq_depth": 128, 00:17:39.217 "num_shared_buffers": 511, 00:17:39.217 "buf_cache_size": 4294967295, 00:17:39.217 "dif_insert_or_strip": false, 00:17:39.217 "zcopy": false, 00:17:39.217 "c2h_success": false, 00:17:39.217 "sock_priority": 0, 00:17:39.217 "abort_timeout_sec": 1, 00:17:39.217 "ack_timeout": 0 00:17:39.217 } 00:17:39.217 }, 00:17:39.217 { 00:17:39.217 "method": "nvmf_create_subsystem", 00:17:39.217 "params": { 00:17:39.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.217 "allow_any_host": false, 00:17:39.217 "serial_number": "SPDK00000000000001", 00:17:39.217 "model_number": "SPDK bdev Controller", 00:17:39.217 "max_namespaces": 10, 00:17:39.218 
"min_cntlid": 1, 00:17:39.218 "max_cntlid": 65519, 00:17:39.218 "ana_reporting": false 00:17:39.218 } 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "method": "nvmf_subsystem_add_host", 00:17:39.218 "params": { 00:17:39.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.218 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.218 "psk": "/tmp/tmp.LsrurwB9IF" 00:17:39.218 } 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "method": "nvmf_subsystem_add_ns", 00:17:39.218 "params": { 00:17:39.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.218 "namespace": { 00:17:39.218 "nsid": 1, 00:17:39.218 "bdev_name": "malloc0", 00:17:39.218 "nguid": "1B7EDB2F876C4141A902505B629E390D", 00:17:39.218 "uuid": "1b7edb2f-876c-4141-a902-505b629e390d", 00:17:39.218 "no_auto_visible": false 00:17:39.218 } 00:17:39.218 } 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "method": "nvmf_subsystem_add_listener", 00:17:39.218 "params": { 00:17:39.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.218 "listen_address": { 00:17:39.218 "trtype": "TCP", 00:17:39.218 "adrfam": "IPv4", 00:17:39.218 "traddr": "10.0.0.2", 00:17:39.218 "trsvcid": "4420" 00:17:39.218 }, 00:17:39.218 "secure_channel": true 00:17:39.218 } 00:17:39.218 } 00:17:39.218 ] 00:17:39.218 } 00:17:39.218 ] 00:17:39.218 }' 00:17:39.218 22:09:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:39.218 22:09:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.218 22:09:21 -- nvmf/common.sh@470 -- # nvmfpid=3962506 00:17:39.218 22:09:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:39.218 22:09:21 -- nvmf/common.sh@471 -- # waitforlisten 3962506 00:17:39.218 22:09:21 -- common/autotest_common.sh@817 -- # '[' -z 3962506 ']' 00:17:39.218 22:09:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.218 22:09:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.218 22:09:21 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.218 22:09:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.218 22:09:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.218 [2024-04-24 22:09:21.405130] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:39.218 [2024-04-24 22:09:21.405226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.218 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.477 [2024-04-24 22:09:21.480305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.477 [2024-04-24 22:09:21.613319] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.477 [2024-04-24 22:09:21.613410] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.477 [2024-04-24 22:09:21.613433] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.477 [2024-04-24 22:09:21.613472] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.477 [2024-04-24 22:09:21.613484] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.477 [2024-04-24 22:09:21.613586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.736 [2024-04-24 22:09:21.862359] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.736 [2024-04-24 22:09:21.878299] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:39.736 [2024-04-24 22:09:21.894333] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:39.736 [2024-04-24 22:09:21.894457] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.736 [2024-04-24 22:09:21.907667] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.302 22:09:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.302 22:09:22 -- common/autotest_common.sh@850 -- # return 0 00:17:40.302 22:09:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:40.302 22:09:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.302 22:09:22 -- common/autotest_common.sh@10 -- # set +x 00:17:40.302 22:09:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.302 22:09:22 -- target/tls.sh@207 -- # bdevperf_pid=3962657 00:17:40.302 22:09:22 -- target/tls.sh@208 -- # waitforlisten 3962657 /var/tmp/bdevperf.sock 00:17:40.302 22:09:22 -- common/autotest_common.sh@817 -- # '[' -z 3962657 ']' 00:17:40.302 22:09:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.302 22:09:22 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:40.302 22:09:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.302 22:09:22 -- 
target/tls.sh@204 -- # echo '{ 00:17:40.302 "subsystems": [ 00:17:40.302 { 00:17:40.302 "subsystem": "keyring", 00:17:40.302 "config": [] 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "subsystem": "iobuf", 00:17:40.302 "config": [ 00:17:40.302 { 00:17:40.302 "method": "iobuf_set_options", 00:17:40.302 "params": { 00:17:40.302 "small_pool_count": 8192, 00:17:40.302 "large_pool_count": 1024, 00:17:40.302 "small_bufsize": 8192, 00:17:40.302 "large_bufsize": 135168 00:17:40.302 } 00:17:40.302 } 00:17:40.302 ] 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "subsystem": "sock", 00:17:40.302 "config": [ 00:17:40.302 { 00:17:40.302 "method": "sock_impl_set_options", 00:17:40.302 "params": { 00:17:40.302 "impl_name": "posix", 00:17:40.302 "recv_buf_size": 2097152, 00:17:40.302 "send_buf_size": 2097152, 00:17:40.302 "enable_recv_pipe": true, 00:17:40.302 "enable_quickack": false, 00:17:40.302 "enable_placement_id": 0, 00:17:40.302 "enable_zerocopy_send_server": true, 00:17:40.302 "enable_zerocopy_send_client": false, 00:17:40.302 "zerocopy_threshold": 0, 00:17:40.302 "tls_version": 0, 00:17:40.302 "enable_ktls": false 00:17:40.302 } 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "method": "sock_impl_set_options", 00:17:40.302 "params": { 00:17:40.302 "impl_name": "ssl", 00:17:40.302 "recv_buf_size": 4096, 00:17:40.302 "send_buf_size": 4096, 00:17:40.302 "enable_recv_pipe": true, 00:17:40.302 "enable_quickack": false, 00:17:40.302 "enable_placement_id": 0, 00:17:40.302 "enable_zerocopy_send_server": true, 00:17:40.302 "enable_zerocopy_send_client": false, 00:17:40.302 "zerocopy_threshold": 0, 00:17:40.302 "tls_version": 0, 00:17:40.302 "enable_ktls": false 00:17:40.302 } 00:17:40.302 } 00:17:40.302 ] 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "subsystem": "vmd", 00:17:40.302 "config": [] 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "subsystem": "accel", 00:17:40.302 "config": [ 00:17:40.302 { 00:17:40.302 "method": "accel_set_options", 00:17:40.302 "params": { 00:17:40.302 
"small_cache_size": 128, 00:17:40.302 "large_cache_size": 16, 00:17:40.302 "task_count": 2048, 00:17:40.302 "sequence_count": 2048, 00:17:40.302 "buf_count": 2048 00:17:40.302 } 00:17:40.302 } 00:17:40.302 ] 00:17:40.302 }, 00:17:40.302 { 00:17:40.302 "subsystem": "bdev", 00:17:40.302 "config": [ 00:17:40.302 { 00:17:40.302 "method": "bdev_set_options", 00:17:40.302 "params": { 00:17:40.303 "bdev_io_pool_size": 65535, 00:17:40.303 "bdev_io_cache_size": 256, 00:17:40.303 "bdev_auto_examine": true, 00:17:40.303 "iobuf_small_cache_size": 128, 00:17:40.303 "iobuf_large_cache_size": 16 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_raid_set_options", 00:17:40.303 "params": { 00:17:40.303 "process_window_size_kb": 1024 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_iscsi_set_options", 00:17:40.303 "params": { 00:17:40.303 "timeout_sec": 30 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_nvme_set_options", 00:17:40.303 "params": { 00:17:40.303 "action_on_timeout": "none", 00:17:40.303 "timeout_us": 0, 00:17:40.303 "timeout_admin_us": 0, 00:17:40.303 "keep_alive_timeout_ms": 10000, 00:17:40.303 "arbitration_burst": 0, 00:17:40.303 "low_priority_weight": 0, 00:17:40.303 "medium_priority_weight": 0, 00:17:40.303 "high_priority_weight": 0, 00:17:40.303 "nvme_adminq_poll_period_us": 10000, 00:17:40.303 "nvme_ioq_poll_period_us": 0, 00:17:40.303 "io_queue_requests": 512, 00:17:40.303 "delay_cmd_submit": true, 00:17:40.303 "transport_retry_count": 4, 00:17:40.303 "bdev_retry_count": 3, 00:17:40.303 "transport_ack_timeout": 0, 00:17:40.303 "ctrlr_loss_timeout_sec": 0, 00:17:40.303 "reconnect_delay_sec": 0, 00:17:40.303 "fast_io_fail_timeout_sec": 0, 00:17:40.303 "disable_auto_failback": false, 00:17:40.303 "generate_uuids": false, 00:17:40.303 "transport_tos": 0, 00:17:40.303 "nvme_error_stat": false, 00:17:40.303 "rdma_srq_size": 0, 00:17:40.303 "io_path_stat": false, 00:17:40.303 
"allow_accel_sequence": false, 00:17:40.303 "rdma_max_cq_size": 0, 00:17:40.303 "rdma_cm_event_timeout_ms": 0, 00:17:40.303 "dhchap_digests": [ 00:17:40.303 "sha256", 00:17:40.303 "sha384", 00:17:40.303 "sha512" 00:17:40.303 ], 00:17:40.303 "dhchap_dhgroups": [ 00:17:40.303 "null", 00:17:40.303 "ffdhe2048", 00:17:40.303 "ffdhe3072", 00:17:40.303 "ffdhe4096", 00:17:40.303 "ffdhe6144", 00:17:40.303 "ffdhe8192" 00:17:40.303 ] 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_nvme_attach_controller", 00:17:40.303 "params": { 00:17:40.303 "name": "TLSTEST", 00:17:40.303 "trtype": "TCP", 00:17:40.303 "adrfam": "IPv4", 00:17:40.303 "traddr": "10.0.0.2", 00:17:40.303 "trsvcid": "4420", 00:17:40.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.303 "prchk_reftag": false, 00:17:40.303 "prchk_guard": false, 00:17:40.303 "ctrlr_loss_timeout_sec": 0, 00:17:40.303 "reconnect_delay_sec": 0, 00:17:40.303 "fast_io_fail_timeout_sec": 0, 00:17:40.303 "psk": "/tmp/tmp.LsrurwB9IF", 00:17:40.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.303 "hdgst": false, 00:17:40.303 "ddgst": false 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_nvme_set_hotplug", 00:17:40.303 "params": { 00:17:40.303 "period_us": 100000, 00:17:40.303 "enable": false 00:17:40.303 } 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "method": "bdev_wait_for_examine" 00:17:40.303 } 00:17:40.303 ] 00:17:40.303 }, 00:17:40.303 { 00:17:40.303 "subsystem": "nbd", 00:17:40.303 "config": [] 00:17:40.303 } 00:17:40.303 ] 00:17:40.303 }' 00:17:40.303 22:09:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:40.303 22:09:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.303 22:09:22 -- common/autotest_common.sh@10 -- # set +x 00:17:40.303 [2024-04-24 22:09:22.493599] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:40.303 [2024-04-24 22:09:22.493688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962657 ] 00:17:40.303 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.560 [2024-04-24 22:09:22.562660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.560 [2024-04-24 22:09:22.681765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.817 [2024-04-24 22:09:22.852030] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.817 [2024-04-24 22:09:22.852188] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.382 22:09:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.382 22:09:23 -- common/autotest_common.sh@850 -- # return 0 00:17:41.382 22:09:23 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.641 Running I/O for 10 seconds... 
00:17:51.618 00:17:51.618 Latency(us) 00:17:51.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.618 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:51.618 Verification LBA range: start 0x0 length 0x2000 00:17:51.618 TLSTESTn1 : 10.03 2966.64 11.59 0.00 0.00 43052.77 6893.42 65633.09 00:17:51.618 =================================================================================================================== 00:17:51.618 Total : 2966.64 11.59 0.00 0.00 43052.77 6893.42 65633.09 00:17:51.618 0 00:17:51.618 22:09:33 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.618 22:09:33 -- target/tls.sh@214 -- # killprocess 3962657 00:17:51.618 22:09:33 -- common/autotest_common.sh@936 -- # '[' -z 3962657 ']' 00:17:51.618 22:09:33 -- common/autotest_common.sh@940 -- # kill -0 3962657 00:17:51.618 22:09:33 -- common/autotest_common.sh@941 -- # uname 00:17:51.618 22:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.618 22:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3962657 00:17:51.618 22:09:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:51.618 22:09:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:51.618 22:09:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3962657' 00:17:51.618 killing process with pid 3962657 00:17:51.618 22:09:33 -- common/autotest_common.sh@955 -- # kill 3962657 00:17:51.618 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.618 00:17:51.618 Latency(us) 00:17:51.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.618 =================================================================================================================== 00:17:51.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.618 [2024-04-24 22:09:33.786186] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:51.618 22:09:33 -- common/autotest_common.sh@960 -- # wait 3962657 00:17:51.876 22:09:34 -- target/tls.sh@215 -- # killprocess 3962506 00:17:51.876 22:09:34 -- common/autotest_common.sh@936 -- # '[' -z 3962506 ']' 00:17:51.876 22:09:34 -- common/autotest_common.sh@940 -- # kill -0 3962506 00:17:51.876 22:09:34 -- common/autotest_common.sh@941 -- # uname 00:17:51.876 22:09:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.876 22:09:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3962506 00:17:51.876 22:09:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:51.876 22:09:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:51.876 22:09:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3962506' 00:17:51.876 killing process with pid 3962506 00:17:51.876 22:09:34 -- common/autotest_common.sh@955 -- # kill 3962506 00:17:51.876 [2024-04-24 22:09:34.108905] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:51.876 [2024-04-24 22:09:34.108968] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:51.876 22:09:34 -- common/autotest_common.sh@960 -- # wait 3962506 00:17:52.443 22:09:34 -- target/tls.sh@218 -- # nvmfappstart 00:17:52.443 22:09:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:52.443 22:09:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:52.443 22:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.443 22:09:34 -- nvmf/common.sh@470 -- # nvmfpid=3963991 00:17:52.443 22:09:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF 00:17:52.443 22:09:34 -- nvmf/common.sh@471 -- # waitforlisten 3963991 00:17:52.443 22:09:34 -- common/autotest_common.sh@817 -- # '[' -z 3963991 ']' 00:17:52.443 22:09:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.443 22:09:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:52.443 22:09:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.443 22:09:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:52.443 22:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.443 [2024-04-24 22:09:34.476082] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:52.443 [2024-04-24 22:09:34.476182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.443 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.443 [2024-04-24 22:09:34.552133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.443 [2024-04-24 22:09:34.675979] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.443 [2024-04-24 22:09:34.676042] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.443 [2024-04-24 22:09:34.676058] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.443 [2024-04-24 22:09:34.676071] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.443 [2024-04-24 22:09:34.676082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:52.443 [2024-04-24 22:09:34.676124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.701 22:09:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:52.701 22:09:34 -- common/autotest_common.sh@850 -- # return 0 00:17:52.701 22:09:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:52.701 22:09:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:52.701 22:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:52.701 22:09:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.701 22:09:34 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LsrurwB9IF 00:17:52.701 22:09:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.LsrurwB9IF 00:17:52.701 22:09:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.959 [2024-04-24 22:09:35.142478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.959 22:09:35 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.526 22:09:35 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.526 [2024-04-24 22:09:35.748031] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:53.526 [2024-04-24 22:09:35.748156] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.526 [2024-04-24 22:09:35.748402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.526 22:09:35 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 32 4096 -b malloc0 00:17:53.783 malloc0 00:17:54.041 22:09:36 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.299 22:09:36 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LsrurwB9IF 00:17:54.557 [2024-04-24 22:09:36.598773] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.557 22:09:36 -- target/tls.sh@222 -- # bdevperf_pid=3964275 00:17:54.557 22:09:36 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:54.557 22:09:36 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.557 22:09:36 -- target/tls.sh@225 -- # waitforlisten 3964275 /var/tmp/bdevperf.sock 00:17:54.557 22:09:36 -- common/autotest_common.sh@817 -- # '[' -z 3964275 ']' 00:17:54.557 22:09:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.557 22:09:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.557 22:09:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.557 22:09:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.557 22:09:36 -- common/autotest_common.sh@10 -- # set +x 00:17:54.557 [2024-04-24 22:09:36.659857] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:17:54.557 [2024-04-24 22:09:36.659938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964275 ] 00:17:54.557 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.557 [2024-04-24 22:09:36.728660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.815 [2024-04-24 22:09:36.848160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.815 22:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.815 22:09:36 -- common/autotest_common.sh@850 -- # return 0 00:17:54.815 22:09:36 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LsrurwB9IF 00:17:55.073 22:09:37 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:55.638 [2024-04-24 22:09:37.816427] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.638 nvme0n1 00:17:55.896 22:09:37 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.896 Running I/O for 1 seconds... 
00:17:57.269 00:17:57.269 Latency(us) 00:17:57.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.269 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:57.269 Verification LBA range: start 0x0 length 0x2000 00:17:57.269 nvme0n1 : 1.04 2662.00 10.40 0.00 0.00 47262.01 6553.60 75730.49 00:17:57.269 =================================================================================================================== 00:17:57.269 Total : 2662.00 10.40 0.00 0.00 47262.01 6553.60 75730.49 00:17:57.269 0 00:17:57.269 22:09:39 -- target/tls.sh@234 -- # killprocess 3964275 00:17:57.269 22:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3964275 ']' 00:17:57.269 22:09:39 -- common/autotest_common.sh@940 -- # kill -0 3964275 00:17:57.269 22:09:39 -- common/autotest_common.sh@941 -- # uname 00:17:57.269 22:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.269 22:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3964275 00:17:57.269 22:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:57.269 22:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:57.269 22:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3964275' 00:17:57.269 killing process with pid 3964275 00:17:57.269 22:09:39 -- common/autotest_common.sh@955 -- # kill 3964275 00:17:57.269 Received shutdown signal, test time was about 1.000000 seconds 00:17:57.269 00:17:57.269 Latency(us) 00:17:57.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.269 =================================================================================================================== 00:17:57.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.269 22:09:39 -- common/autotest_common.sh@960 -- # wait 3964275 00:17:57.269 22:09:39 -- target/tls.sh@235 -- # killprocess 3963991 00:17:57.269 22:09:39 -- common/autotest_common.sh@936 -- # 
'[' -z 3963991 ']' 00:17:57.269 22:09:39 -- common/autotest_common.sh@940 -- # kill -0 3963991 00:17:57.269 22:09:39 -- common/autotest_common.sh@941 -- # uname 00:17:57.269 22:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.269 22:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3963991 00:17:57.527 22:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.527 22:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.527 22:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3963991' 00:17:57.527 killing process with pid 3963991 00:17:57.527 22:09:39 -- common/autotest_common.sh@955 -- # kill 3963991 00:17:57.527 [2024-04-24 22:09:39.533000] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:57.527 [2024-04-24 22:09:39.533059] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:57.527 22:09:39 -- common/autotest_common.sh@960 -- # wait 3963991 00:17:57.784 22:09:39 -- target/tls.sh@238 -- # nvmfappstart 00:17:57.784 22:09:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:57.784 22:09:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:57.784 22:09:39 -- common/autotest_common.sh@10 -- # set +x 00:17:57.784 22:09:39 -- nvmf/common.sh@470 -- # nvmfpid=3964677 00:17:57.784 22:09:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:57.784 22:09:39 -- nvmf/common.sh@471 -- # waitforlisten 3964677 00:17:57.784 22:09:39 -- common/autotest_common.sh@817 -- # '[' -z 3964677 ']' 00:17:57.784 22:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.784 22:09:39 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.784 22:09:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.784 22:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.784 22:09:39 -- common/autotest_common.sh@10 -- # set +x 00:17:57.784 [2024-04-24 22:09:39.896778] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:57.785 [2024-04-24 22:09:39.896883] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.785 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.785 [2024-04-24 22:09:39.975366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.043 [2024-04-24 22:09:40.101595] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.043 [2024-04-24 22:09:40.101675] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.043 [2024-04-24 22:09:40.101691] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.043 [2024-04-24 22:09:40.101707] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.043 [2024-04-24 22:09:40.101719] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.043 [2024-04-24 22:09:40.101759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.043 22:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.043 22:09:40 -- common/autotest_common.sh@850 -- # return 0 00:17:58.043 22:09:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:58.043 22:09:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:58.043 22:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 22:09:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.043 22:09:40 -- target/tls.sh@239 -- # rpc_cmd 00:17:58.043 22:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.043 22:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 [2024-04-24 22:09:40.257490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.043 malloc0 00:17:58.043 [2024-04-24 22:09:40.290157] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:58.043 [2024-04-24 22:09:40.290260] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.043 [2024-04-24 22:09:40.290537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.300 22:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.300 22:09:40 -- target/tls.sh@252 -- # bdevperf_pid=3964705 00:17:58.300 22:09:40 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:58.300 22:09:40 -- target/tls.sh@254 -- # waitforlisten 3964705 /var/tmp/bdevperf.sock 00:17:58.300 22:09:40 -- common/autotest_common.sh@817 -- # '[' -z 3964705 ']' 00:17:58.300 22:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:58.300 22:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.300 22:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.300 22:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.300 22:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.300 [2024-04-24 22:09:40.363392] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:58.300 [2024-04-24 22:09:40.363480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964705 ] 00:17:58.300 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.300 [2024-04-24 22:09:40.431491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.300 [2024-04-24 22:09:40.551136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.558 22:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.558 22:09:40 -- common/autotest_common.sh@850 -- # return 0 00:17:58.558 22:09:40 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LsrurwB9IF 00:17:58.815 22:09:40 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:59.420 [2024-04-24 22:09:41.521080] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.420 nvme0n1 00:17:59.420 22:09:41 -- target/tls.sh@260 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.678 Running I/O for 1 seconds... 00:18:00.611 00:18:00.611 Latency(us) 00:18:00.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.611 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.611 Verification LBA range: start 0x0 length 0x2000 00:18:00.611 nvme0n1 : 1.04 2630.17 10.27 0.00 0.00 47756.98 11408.12 57865.86 00:18:00.611 =================================================================================================================== 00:18:00.611 Total : 2630.17 10.27 0.00 0.00 47756.98 11408.12 57865.86 00:18:00.611 0 00:18:00.611 22:09:42 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:00.611 22:09:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.611 22:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.870 22:09:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.870 22:09:42 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:00.870 "subsystems": [ 00:18:00.870 { 00:18:00.870 "subsystem": "keyring", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "keyring_file_add_key", 00:18:00.870 "params": { 00:18:00.870 "name": "key0", 00:18:00.870 "path": "/tmp/tmp.LsrurwB9IF" 00:18:00.870 } 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "iobuf", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "iobuf_set_options", 00:18:00.870 "params": { 00:18:00.870 "small_pool_count": 8192, 00:18:00.870 "large_pool_count": 1024, 00:18:00.870 "small_bufsize": 8192, 00:18:00.870 "large_bufsize": 135168 00:18:00.870 } 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "sock", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "sock_impl_set_options", 00:18:00.870 "params": { 00:18:00.870 "impl_name": "posix", 00:18:00.870 "recv_buf_size": 
2097152, 00:18:00.870 "send_buf_size": 2097152, 00:18:00.870 "enable_recv_pipe": true, 00:18:00.870 "enable_quickack": false, 00:18:00.870 "enable_placement_id": 0, 00:18:00.870 "enable_zerocopy_send_server": true, 00:18:00.870 "enable_zerocopy_send_client": false, 00:18:00.870 "zerocopy_threshold": 0, 00:18:00.870 "tls_version": 0, 00:18:00.870 "enable_ktls": false 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "sock_impl_set_options", 00:18:00.870 "params": { 00:18:00.870 "impl_name": "ssl", 00:18:00.870 "recv_buf_size": 4096, 00:18:00.870 "send_buf_size": 4096, 00:18:00.870 "enable_recv_pipe": true, 00:18:00.870 "enable_quickack": false, 00:18:00.870 "enable_placement_id": 0, 00:18:00.870 "enable_zerocopy_send_server": true, 00:18:00.870 "enable_zerocopy_send_client": false, 00:18:00.870 "zerocopy_threshold": 0, 00:18:00.870 "tls_version": 0, 00:18:00.870 "enable_ktls": false 00:18:00.870 } 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "vmd", 00:18:00.870 "config": [] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "accel", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "accel_set_options", 00:18:00.870 "params": { 00:18:00.870 "small_cache_size": 128, 00:18:00.870 "large_cache_size": 16, 00:18:00.870 "task_count": 2048, 00:18:00.870 "sequence_count": 2048, 00:18:00.870 "buf_count": 2048 00:18:00.870 } 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "bdev", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "bdev_set_options", 00:18:00.870 "params": { 00:18:00.870 "bdev_io_pool_size": 65535, 00:18:00.870 "bdev_io_cache_size": 256, 00:18:00.870 "bdev_auto_examine": true, 00:18:00.870 "iobuf_small_cache_size": 128, 00:18:00.870 "iobuf_large_cache_size": 16 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_raid_set_options", 00:18:00.870 "params": { 00:18:00.870 "process_window_size_kb": 1024 
00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_iscsi_set_options", 00:18:00.870 "params": { 00:18:00.870 "timeout_sec": 30 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_nvme_set_options", 00:18:00.870 "params": { 00:18:00.870 "action_on_timeout": "none", 00:18:00.870 "timeout_us": 0, 00:18:00.870 "timeout_admin_us": 0, 00:18:00.870 "keep_alive_timeout_ms": 10000, 00:18:00.870 "arbitration_burst": 0, 00:18:00.870 "low_priority_weight": 0, 00:18:00.870 "medium_priority_weight": 0, 00:18:00.870 "high_priority_weight": 0, 00:18:00.870 "nvme_adminq_poll_period_us": 10000, 00:18:00.870 "nvme_ioq_poll_period_us": 0, 00:18:00.870 "io_queue_requests": 0, 00:18:00.870 "delay_cmd_submit": true, 00:18:00.870 "transport_retry_count": 4, 00:18:00.870 "bdev_retry_count": 3, 00:18:00.870 "transport_ack_timeout": 0, 00:18:00.870 "ctrlr_loss_timeout_sec": 0, 00:18:00.870 "reconnect_delay_sec": 0, 00:18:00.870 "fast_io_fail_timeout_sec": 0, 00:18:00.870 "disable_auto_failback": false, 00:18:00.870 "generate_uuids": false, 00:18:00.870 "transport_tos": 0, 00:18:00.870 "nvme_error_stat": false, 00:18:00.870 "rdma_srq_size": 0, 00:18:00.870 "io_path_stat": false, 00:18:00.870 "allow_accel_sequence": false, 00:18:00.870 "rdma_max_cq_size": 0, 00:18:00.870 "rdma_cm_event_timeout_ms": 0, 00:18:00.870 "dhchap_digests": [ 00:18:00.870 "sha256", 00:18:00.870 "sha384", 00:18:00.870 "sha512" 00:18:00.870 ], 00:18:00.870 "dhchap_dhgroups": [ 00:18:00.870 "null", 00:18:00.870 "ffdhe2048", 00:18:00.870 "ffdhe3072", 00:18:00.870 "ffdhe4096", 00:18:00.870 "ffdhe6144", 00:18:00.870 "ffdhe8192" 00:18:00.870 ] 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_nvme_set_hotplug", 00:18:00.870 "params": { 00:18:00.870 "period_us": 100000, 00:18:00.870 "enable": false 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_malloc_create", 00:18:00.870 "params": { 00:18:00.870 "name": "malloc0", 00:18:00.870 
"num_blocks": 8192, 00:18:00.870 "block_size": 4096, 00:18:00.870 "physical_block_size": 4096, 00:18:00.870 "uuid": "5689a444-1d7b-4fc1-a61a-67bf8abcd0ee", 00:18:00.870 "optimal_io_boundary": 0 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "bdev_wait_for_examine" 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "nbd", 00:18:00.870 "config": [] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "scheduler", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "framework_set_scheduler", 00:18:00.870 "params": { 00:18:00.870 "name": "static" 00:18:00.870 } 00:18:00.870 } 00:18:00.870 ] 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "subsystem": "nvmf", 00:18:00.870 "config": [ 00:18:00.870 { 00:18:00.870 "method": "nvmf_set_config", 00:18:00.870 "params": { 00:18:00.870 "discovery_filter": "match_any", 00:18:00.870 "admin_cmd_passthru": { 00:18:00.870 "identify_ctrlr": false 00:18:00.870 } 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_set_max_subsystems", 00:18:00.870 "params": { 00:18:00.870 "max_subsystems": 1024 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_set_crdt", 00:18:00.870 "params": { 00:18:00.870 "crdt1": 0, 00:18:00.870 "crdt2": 0, 00:18:00.870 "crdt3": 0 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_create_transport", 00:18:00.870 "params": { 00:18:00.870 "trtype": "TCP", 00:18:00.870 "max_queue_depth": 128, 00:18:00.870 "max_io_qpairs_per_ctrlr": 127, 00:18:00.870 "in_capsule_data_size": 4096, 00:18:00.870 "max_io_size": 131072, 00:18:00.870 "io_unit_size": 131072, 00:18:00.870 "max_aq_depth": 128, 00:18:00.870 "num_shared_buffers": 511, 00:18:00.870 "buf_cache_size": 4294967295, 00:18:00.870 "dif_insert_or_strip": false, 00:18:00.870 "zcopy": false, 00:18:00.870 "c2h_success": false, 00:18:00.870 "sock_priority": 0, 00:18:00.870 "abort_timeout_sec": 1, 00:18:00.870 "ack_timeout": 0 00:18:00.870 
} 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_create_subsystem", 00:18:00.870 "params": { 00:18:00.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.870 "allow_any_host": false, 00:18:00.870 "serial_number": "00000000000000000000", 00:18:00.870 "model_number": "SPDK bdev Controller", 00:18:00.870 "max_namespaces": 32, 00:18:00.870 "min_cntlid": 1, 00:18:00.870 "max_cntlid": 65519, 00:18:00.870 "ana_reporting": false 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_subsystem_add_host", 00:18:00.870 "params": { 00:18:00.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.870 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.870 "psk": "key0" 00:18:00.870 } 00:18:00.870 }, 00:18:00.870 { 00:18:00.870 "method": "nvmf_subsystem_add_ns", 00:18:00.870 "params": { 00:18:00.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.870 "namespace": { 00:18:00.870 "nsid": 1, 00:18:00.870 "bdev_name": "malloc0", 00:18:00.870 "nguid": "5689A4441D7B4FC1A61A67BF8ABCD0EE", 00:18:00.870 "uuid": "5689a444-1d7b-4fc1-a61a-67bf8abcd0ee", 00:18:00.871 "no_auto_visible": false 00:18:00.871 } 00:18:00.871 } 00:18:00.871 }, 00:18:00.871 { 00:18:00.871 "method": "nvmf_subsystem_add_listener", 00:18:00.871 "params": { 00:18:00.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.871 "listen_address": { 00:18:00.871 "trtype": "TCP", 00:18:00.871 "adrfam": "IPv4", 00:18:00.871 "traddr": "10.0.0.2", 00:18:00.871 "trsvcid": "4420" 00:18:00.871 }, 00:18:00.871 "secure_channel": true 00:18:00.871 } 00:18:00.871 } 00:18:00.871 ] 00:18:00.871 } 00:18:00.871 ] 00:18:00.871 }' 00:18:00.871 22:09:42 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:01.129 22:09:43 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:01.129 "subsystems": [ 00:18:01.129 { 00:18:01.129 "subsystem": "keyring", 00:18:01.129 "config": [ 00:18:01.129 { 00:18:01.129 "method": "keyring_file_add_key", 00:18:01.129 "params": { 
00:18:01.129 "name": "key0", 00:18:01.129 "path": "/tmp/tmp.LsrurwB9IF" 00:18:01.129 } 00:18:01.129 } 00:18:01.129 ] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "iobuf", 00:18:01.129 "config": [ 00:18:01.129 { 00:18:01.129 "method": "iobuf_set_options", 00:18:01.129 "params": { 00:18:01.129 "small_pool_count": 8192, 00:18:01.129 "large_pool_count": 1024, 00:18:01.129 "small_bufsize": 8192, 00:18:01.129 "large_bufsize": 135168 00:18:01.129 } 00:18:01.129 } 00:18:01.129 ] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "sock", 00:18:01.129 "config": [ 00:18:01.129 { 00:18:01.129 "method": "sock_impl_set_options", 00:18:01.129 "params": { 00:18:01.129 "impl_name": "posix", 00:18:01.129 "recv_buf_size": 2097152, 00:18:01.129 "send_buf_size": 2097152, 00:18:01.129 "enable_recv_pipe": true, 00:18:01.129 "enable_quickack": false, 00:18:01.129 "enable_placement_id": 0, 00:18:01.129 "enable_zerocopy_send_server": true, 00:18:01.129 "enable_zerocopy_send_client": false, 00:18:01.129 "zerocopy_threshold": 0, 00:18:01.129 "tls_version": 0, 00:18:01.129 "enable_ktls": false 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "sock_impl_set_options", 00:18:01.129 "params": { 00:18:01.129 "impl_name": "ssl", 00:18:01.129 "recv_buf_size": 4096, 00:18:01.129 "send_buf_size": 4096, 00:18:01.129 "enable_recv_pipe": true, 00:18:01.129 "enable_quickack": false, 00:18:01.129 "enable_placement_id": 0, 00:18:01.129 "enable_zerocopy_send_server": true, 00:18:01.129 "enable_zerocopy_send_client": false, 00:18:01.129 "zerocopy_threshold": 0, 00:18:01.129 "tls_version": 0, 00:18:01.129 "enable_ktls": false 00:18:01.129 } 00:18:01.129 } 00:18:01.129 ] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "vmd", 00:18:01.129 "config": [] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "accel", 00:18:01.129 "config": [ 00:18:01.129 { 00:18:01.129 "method": "accel_set_options", 00:18:01.129 "params": { 00:18:01.129 "small_cache_size": 128, 
00:18:01.129 "large_cache_size": 16, 00:18:01.129 "task_count": 2048, 00:18:01.129 "sequence_count": 2048, 00:18:01.129 "buf_count": 2048 00:18:01.129 } 00:18:01.129 } 00:18:01.129 ] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "bdev", 00:18:01.129 "config": [ 00:18:01.129 { 00:18:01.129 "method": "bdev_set_options", 00:18:01.129 "params": { 00:18:01.129 "bdev_io_pool_size": 65535, 00:18:01.129 "bdev_io_cache_size": 256, 00:18:01.129 "bdev_auto_examine": true, 00:18:01.129 "iobuf_small_cache_size": 128, 00:18:01.129 "iobuf_large_cache_size": 16 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_raid_set_options", 00:18:01.129 "params": { 00:18:01.129 "process_window_size_kb": 1024 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_iscsi_set_options", 00:18:01.129 "params": { 00:18:01.129 "timeout_sec": 30 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_nvme_set_options", 00:18:01.129 "params": { 00:18:01.129 "action_on_timeout": "none", 00:18:01.129 "timeout_us": 0, 00:18:01.129 "timeout_admin_us": 0, 00:18:01.129 "keep_alive_timeout_ms": 10000, 00:18:01.129 "arbitration_burst": 0, 00:18:01.129 "low_priority_weight": 0, 00:18:01.129 "medium_priority_weight": 0, 00:18:01.129 "high_priority_weight": 0, 00:18:01.129 "nvme_adminq_poll_period_us": 10000, 00:18:01.129 "nvme_ioq_poll_period_us": 0, 00:18:01.129 "io_queue_requests": 512, 00:18:01.129 "delay_cmd_submit": true, 00:18:01.129 "transport_retry_count": 4, 00:18:01.129 "bdev_retry_count": 3, 00:18:01.129 "transport_ack_timeout": 0, 00:18:01.129 "ctrlr_loss_timeout_sec": 0, 00:18:01.129 "reconnect_delay_sec": 0, 00:18:01.129 "fast_io_fail_timeout_sec": 0, 00:18:01.129 "disable_auto_failback": false, 00:18:01.129 "generate_uuids": false, 00:18:01.129 "transport_tos": 0, 00:18:01.129 "nvme_error_stat": false, 00:18:01.129 "rdma_srq_size": 0, 00:18:01.129 "io_path_stat": false, 00:18:01.129 "allow_accel_sequence": false, 
00:18:01.129 "rdma_max_cq_size": 0, 00:18:01.129 "rdma_cm_event_timeout_ms": 0, 00:18:01.129 "dhchap_digests": [ 00:18:01.129 "sha256", 00:18:01.129 "sha384", 00:18:01.129 "sha512" 00:18:01.129 ], 00:18:01.129 "dhchap_dhgroups": [ 00:18:01.129 "null", 00:18:01.129 "ffdhe2048", 00:18:01.129 "ffdhe3072", 00:18:01.129 "ffdhe4096", 00:18:01.129 "ffdhe6144", 00:18:01.129 "ffdhe8192" 00:18:01.129 ] 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_nvme_attach_controller", 00:18:01.129 "params": { 00:18:01.129 "name": "nvme0", 00:18:01.129 "trtype": "TCP", 00:18:01.129 "adrfam": "IPv4", 00:18:01.129 "traddr": "10.0.0.2", 00:18:01.129 "trsvcid": "4420", 00:18:01.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.129 "prchk_reftag": false, 00:18:01.129 "prchk_guard": false, 00:18:01.129 "ctrlr_loss_timeout_sec": 0, 00:18:01.129 "reconnect_delay_sec": 0, 00:18:01.129 "fast_io_fail_timeout_sec": 0, 00:18:01.129 "psk": "key0", 00:18:01.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.129 "hdgst": false, 00:18:01.129 "ddgst": false 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_nvme_set_hotplug", 00:18:01.129 "params": { 00:18:01.129 "period_us": 100000, 00:18:01.129 "enable": false 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_enable_histogram", 00:18:01.129 "params": { 00:18:01.129 "name": "nvme0n1", 00:18:01.129 "enable": true 00:18:01.129 } 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "method": "bdev_wait_for_examine" 00:18:01.129 } 00:18:01.129 ] 00:18:01.129 }, 00:18:01.129 { 00:18:01.129 "subsystem": "nbd", 00:18:01.129 "config": [] 00:18:01.130 } 00:18:01.130 ] 00:18:01.130 }' 00:18:01.130 22:09:43 -- target/tls.sh@266 -- # killprocess 3964705 00:18:01.130 22:09:43 -- common/autotest_common.sh@936 -- # '[' -z 3964705 ']' 00:18:01.130 22:09:43 -- common/autotest_common.sh@940 -- # kill -0 3964705 00:18:01.130 22:09:43 -- common/autotest_common.sh@941 -- # uname 00:18:01.130 22:09:43 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.130 22:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3964705 00:18:01.130 22:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.130 22:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.130 22:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3964705' 00:18:01.130 killing process with pid 3964705 00:18:01.130 22:09:43 -- common/autotest_common.sh@955 -- # kill 3964705 00:18:01.130 Received shutdown signal, test time was about 1.000000 seconds 00:18:01.130 00:18:01.130 Latency(us) 00:18:01.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.130 =================================================================================================================== 00:18:01.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.130 22:09:43 -- common/autotest_common.sh@960 -- # wait 3964705 00:18:01.697 22:09:43 -- target/tls.sh@267 -- # killprocess 3964677 00:18:01.697 22:09:43 -- common/autotest_common.sh@936 -- # '[' -z 3964677 ']' 00:18:01.697 22:09:43 -- common/autotest_common.sh@940 -- # kill -0 3964677 00:18:01.697 22:09:43 -- common/autotest_common.sh@941 -- # uname 00:18:01.697 22:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.697 22:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3964677 00:18:01.697 22:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:01.697 22:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:01.697 22:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3964677' 00:18:01.697 killing process with pid 3964677 00:18:01.697 22:09:43 -- common/autotest_common.sh@955 -- # kill 3964677 00:18:01.697 [2024-04-24 22:09:43.678882] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:01.697 22:09:43 -- common/autotest_common.sh@960 -- # wait 3964677 00:18:01.956 22:09:43 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:01.956 22:09:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:01.956 22:09:43 -- target/tls.sh@269 -- # echo '{ 00:18:01.956 "subsystems": [ 00:18:01.956 { 00:18:01.956 "subsystem": "keyring", 00:18:01.956 "config": [ 00:18:01.956 { 00:18:01.956 "method": "keyring_file_add_key", 00:18:01.956 "params": { 00:18:01.956 "name": "key0", 00:18:01.956 "path": "/tmp/tmp.LsrurwB9IF" 00:18:01.956 } 00:18:01.956 } 00:18:01.956 ] 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "subsystem": "iobuf", 00:18:01.956 "config": [ 00:18:01.956 { 00:18:01.956 "method": "iobuf_set_options", 00:18:01.956 "params": { 00:18:01.956 "small_pool_count": 8192, 00:18:01.956 "large_pool_count": 1024, 00:18:01.956 "small_bufsize": 8192, 00:18:01.956 "large_bufsize": 135168 00:18:01.956 } 00:18:01.956 } 00:18:01.956 ] 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "subsystem": "sock", 00:18:01.956 "config": [ 00:18:01.956 { 00:18:01.956 "method": "sock_impl_set_options", 00:18:01.956 "params": { 00:18:01.956 "impl_name": "posix", 00:18:01.956 "recv_buf_size": 2097152, 00:18:01.956 "send_buf_size": 2097152, 00:18:01.956 "enable_recv_pipe": true, 00:18:01.956 "enable_quickack": false, 00:18:01.956 "enable_placement_id": 0, 00:18:01.956 "enable_zerocopy_send_server": true, 00:18:01.956 "enable_zerocopy_send_client": false, 00:18:01.956 "zerocopy_threshold": 0, 00:18:01.956 "tls_version": 0, 00:18:01.956 "enable_ktls": false 00:18:01.956 } 00:18:01.956 }, 00:18:01.956 { 00:18:01.956 "method": "sock_impl_set_options", 00:18:01.956 "params": { 00:18:01.956 "impl_name": "ssl", 00:18:01.956 "recv_buf_size": 4096, 00:18:01.956 "send_buf_size": 4096, 00:18:01.956 "enable_recv_pipe": true, 00:18:01.956 "enable_quickack": false, 00:18:01.956 "enable_placement_id": 0, 
00:18:01.956 "enable_zerocopy_send_server": true, 00:18:01.956 "enable_zerocopy_send_client": false, 00:18:01.956 "zerocopy_threshold": 0, 00:18:01.956 "tls_version": 0, 00:18:01.956 "enable_ktls": false 00:18:01.956 } 00:18:01.956 } 00:18:01.956 ] 00:18:01.956 }, 00:18:01.956 { 00:18:01.957 "subsystem": "vmd", 00:18:01.957 "config": [] 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "subsystem": "accel", 00:18:01.957 "config": [ 00:18:01.957 { 00:18:01.957 "method": "accel_set_options", 00:18:01.957 "params": { 00:18:01.957 "small_cache_size": 128, 00:18:01.957 "large_cache_size": 16, 00:18:01.957 "task_count": 2048, 00:18:01.957 "sequence_count": 2048, 00:18:01.957 "buf_count": 2048 00:18:01.957 } 00:18:01.957 } 00:18:01.957 ] 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "subsystem": "bdev", 00:18:01.957 "config": [ 00:18:01.957 { 00:18:01.957 "method": "bdev_set_options", 00:18:01.957 "params": { 00:18:01.957 "bdev_io_pool_size": 65535, 00:18:01.957 "bdev_io_cache_size": 256, 00:18:01.957 "bdev_auto_examine": true, 00:18:01.957 "iobuf_small_cache_size": 128, 00:18:01.957 "iobuf_large_cache_size": 16 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_raid_set_options", 00:18:01.957 "params": { 00:18:01.957 "process_window_size_kb": 1024 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_iscsi_set_options", 00:18:01.957 "params": { 00:18:01.957 "timeout_sec": 30 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_nvme_set_options", 00:18:01.957 "params": { 00:18:01.957 "action_on_timeout": "none", 00:18:01.957 "timeout_us": 0, 00:18:01.957 "timeout_admin_us": 0, 00:18:01.957 "keep_alive_timeout_ms": 10000, 00:18:01.957 "arbitration_burst": 0, 00:18:01.957 "low_priority_weight": 0, 00:18:01.957 "medium_priority_weight": 0, 00:18:01.957 "high_priority_weight": 0, 00:18:01.957 "nvme_adminq_poll_period_us": 10000, 00:18:01.957 "nvme_ioq_poll_period_us": 0, 00:18:01.957 "io_queue_requests": 0, 
00:18:01.957 "delay_cmd_submit": true, 00:18:01.957 "transport_retry_count": 4, 00:18:01.957 "bdev_retry_count": 3, 00:18:01.957 "transport_ack_timeout": 0, 00:18:01.957 "ctrlr_loss_timeout_sec": 0, 00:18:01.957 "reconnect_delay_sec": 0, 00:18:01.957 "fast_io_fail_timeout_sec": 0, 00:18:01.957 "disable_auto_failback": false, 00:18:01.957 "generate_uuids": false, 00:18:01.957 "transport_tos": 0, 00:18:01.957 "nvme_error_stat": false, 00:18:01.957 "rdma_srq_size": 0, 00:18:01.957 "io_path_stat": false, 00:18:01.957 "allow_accel_sequence": false, 00:18:01.957 "rdma_max_cq_size": 0, 00:18:01.957 "rdma_cm_event_timeout_ms": 0, 00:18:01.957 "dhchap_digests": [ 00:18:01.957 "sha256", 00:18:01.957 "sha384", 00:18:01.957 "sha512" 00:18:01.957 ], 00:18:01.957 "dhchap_dhgroups": [ 00:18:01.957 "null", 00:18:01.957 "ffdhe2048", 00:18:01.957 "ffdhe3072", 00:18:01.957 "ffdhe4096", 00:18:01.957 "ffdhe6144", 00:18:01.957 "ffdhe8192" 00:18:01.957 ] 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_nvme_set_hotplug", 00:18:01.957 "params": { 00:18:01.957 "period_us": 100000, 00:18:01.957 "enable": false 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_malloc_create", 00:18:01.957 "params": { 00:18:01.957 "name": "malloc0", 00:18:01.957 "num_blocks": 8192, 00:18:01.957 "block_size": 4096, 00:18:01.957 "physical_block_size": 4096, 00:18:01.957 "uuid": "5689a444-1d7b-4fc1-a61a-67bf8abcd0ee", 00:18:01.957 "optimal_io_boundary": 0 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "bdev_wait_for_examine" 00:18:01.957 } 00:18:01.957 ] 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "subsystem": "nbd", 00:18:01.957 "config": [] 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "subsystem": "scheduler", 00:18:01.957 "config": [ 00:18:01.957 { 00:18:01.957 "method": "framework_set_scheduler", 00:18:01.957 "params": { 00:18:01.957 "name": "static" 00:18:01.957 } 00:18:01.957 } 00:18:01.957 ] 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 
"subsystem": "nvmf", 00:18:01.957 "config": [ 00:18:01.957 { 00:18:01.957 "method": "nvmf_set_config", 00:18:01.957 "params": { 00:18:01.957 "discovery_filter": "match_any", 00:18:01.957 "admin_cmd_passthru": { 00:18:01.957 "identify_ctrlr": false 00:18:01.957 } 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_set_max_subsystems", 00:18:01.957 "params": { 00:18:01.957 "max_subsystems": 1024 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_set_crdt", 00:18:01.957 "params": { 00:18:01.957 "crdt1": 0, 00:18:01.957 "crdt2": 0, 00:18:01.957 "crdt3": 0 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_create_transport", 00:18:01.957 "params": { 00:18:01.957 "trtype": "TCP", 00:18:01.957 "max_queue_depth": 128, 00:18:01.957 "max_io_qpairs_per_ctrlr": 127, 00:18:01.957 "in_capsule_data_size": 4096, 00:18:01.957 "max_io_size": 131072, 00:18:01.957 "io_unit_size": 131072, 00:18:01.957 "max_aq_depth": 128, 00:18:01.957 "num_shared_buffers": 511, 00:18:01.957 "buf_cache_size": 4294967295, 00:18:01.957 "dif_insert_or_strip": false, 00:18:01.957 "zcopy": false, 00:18:01.957 "c2h_success": false, 00:18:01.957 "sock_priority": 0, 00:18:01.957 "abort_timeout_sec": 1, 00:18:01.957 "ack_timeout": 0 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_create_subsystem", 00:18:01.957 "params": { 00:18:01.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.957 "allow_any_host": false, 00:18:01.957 "serial_number": "00000000000000000000", 00:18:01.957 "model_number": "SPDK bdev Controller", 00:18:01.957 "max_namespaces": 32, 00:18:01.957 "min_cntlid": 1, 00:18:01.957 "max_cntlid": 65519, 00:18:01.957 "ana_reporting": false 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_subsystem_add_host", 00:18:01.957 "params": { 00:18:01.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.957 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.957 "psk": "key0" 00:18:01.957 } 
00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_subsystem_add_ns", 00:18:01.957 "params": { 00:18:01.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.957 "namespace": { 00:18:01.957 "nsid": 1, 00:18:01.957 "bdev_name": "malloc0", 00:18:01.957 "nguid": "5689A4441D7B4FC1A61A67BF8ABCD0EE", 00:18:01.957 "uuid": "5689a444-1d7b-4fc1-a61a-67bf8abcd0ee", 00:18:01.957 "no_auto_visible": false 00:18:01.957 } 00:18:01.957 } 00:18:01.957 }, 00:18:01.957 { 00:18:01.957 "method": "nvmf_subsystem_add_listener", 00:18:01.957 "params": { 00:18:01.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.957 "listen_address": { 00:18:01.957 "trtype": "TCP", 00:18:01.957 "adrfam": "IPv4", 00:18:01.957 "traddr": "10.0.0.2", 00:18:01.957 "trsvcid": "4420" 00:18:01.957 }, 00:18:01.957 "secure_channel": true 00:18:01.957 } 00:18:01.957 } 00:18:01.957 ] 00:18:01.957 } 00:18:01.957 ] 00:18:01.957 }' 00:18:01.957 22:09:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:01.957 22:09:43 -- common/autotest_common.sh@10 -- # set +x 00:18:01.957 22:09:43 -- nvmf/common.sh@470 -- # nvmfpid=3965123 00:18:01.957 22:09:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:01.957 22:09:43 -- nvmf/common.sh@471 -- # waitforlisten 3965123 00:18:01.957 22:09:43 -- common/autotest_common.sh@817 -- # '[' -z 3965123 ']' 00:18:01.957 22:09:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.957 22:09:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.957 22:09:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.957 22:09:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.957 22:09:43 -- common/autotest_common.sh@10 -- # set +x 00:18:01.957 [2024-04-24 22:09:44.043838] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:01.957 [2024-04-24 22:09:44.043945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.957 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.957 [2024-04-24 22:09:44.121591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.216 [2024-04-24 22:09:44.241362] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.216 [2024-04-24 22:09:44.241435] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.216 [2024-04-24 22:09:44.241453] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.216 [2024-04-24 22:09:44.241466] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.216 [2024-04-24 22:09:44.241479] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:02.216 [2024-04-24 22:09:44.241583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.473 [2024-04-24 22:09:44.483461] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.473 [2024-04-24 22:09:44.515432] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:02.473 [2024-04-24 22:09:44.515525] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.473 [2024-04-24 22:09:44.526623] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.039 22:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:03.039 22:09:45 -- common/autotest_common.sh@850 -- # return 0 00:18:03.039 22:09:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:03.039 22:09:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:03.039 22:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:03.039 22:09:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.039 22:09:45 -- target/tls.sh@272 -- # bdevperf_pid=3965273 00:18:03.039 22:09:45 -- target/tls.sh@273 -- # waitforlisten 3965273 /var/tmp/bdevperf.sock 00:18:03.039 22:09:45 -- common/autotest_common.sh@817 -- # '[' -z 3965273 ']' 00:18:03.039 22:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.039 22:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.039 22:09:45 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:03.039 22:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:03.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.039 22:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.039 22:09:45 -- target/tls.sh@270 -- # echo '{ 00:18:03.039 "subsystems": [ 00:18:03.039 { 00:18:03.039 "subsystem": "keyring", 00:18:03.039 "config": [ 00:18:03.039 { 00:18:03.039 "method": "keyring_file_add_key", 00:18:03.039 "params": { 00:18:03.039 "name": "key0", 00:18:03.039 "path": "/tmp/tmp.LsrurwB9IF" 00:18:03.039 } 00:18:03.039 } 00:18:03.039 ] 00:18:03.039 }, 00:18:03.039 { 00:18:03.039 "subsystem": "iobuf", 00:18:03.039 "config": [ 00:18:03.039 { 00:18:03.039 "method": "iobuf_set_options", 00:18:03.039 "params": { 00:18:03.039 "small_pool_count": 8192, 00:18:03.039 "large_pool_count": 1024, 00:18:03.039 "small_bufsize": 8192, 00:18:03.039 "large_bufsize": 135168 00:18:03.039 } 00:18:03.039 } 00:18:03.039 ] 00:18:03.039 }, 00:18:03.039 { 00:18:03.039 "subsystem": "sock", 00:18:03.039 "config": [ 00:18:03.039 { 00:18:03.039 "method": "sock_impl_set_options", 00:18:03.040 "params": { 00:18:03.040 "impl_name": "posix", 00:18:03.040 "recv_buf_size": 2097152, 00:18:03.040 "send_buf_size": 2097152, 00:18:03.040 "enable_recv_pipe": true, 00:18:03.040 "enable_quickack": false, 00:18:03.040 "enable_placement_id": 0, 00:18:03.040 "enable_zerocopy_send_server": true, 00:18:03.040 "enable_zerocopy_send_client": false, 00:18:03.040 "zerocopy_threshold": 0, 00:18:03.040 "tls_version": 0, 00:18:03.040 "enable_ktls": false 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "sock_impl_set_options", 00:18:03.040 "params": { 00:18:03.040 "impl_name": "ssl", 00:18:03.040 "recv_buf_size": 4096, 00:18:03.040 "send_buf_size": 4096, 00:18:03.040 "enable_recv_pipe": true, 00:18:03.040 "enable_quickack": false, 00:18:03.040 "enable_placement_id": 0, 00:18:03.040 "enable_zerocopy_send_server": true, 00:18:03.040 "enable_zerocopy_send_client": false, 00:18:03.040 
"zerocopy_threshold": 0, 00:18:03.040 "tls_version": 0, 00:18:03.040 "enable_ktls": false 00:18:03.040 } 00:18:03.040 } 00:18:03.040 ] 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "subsystem": "vmd", 00:18:03.040 "config": [] 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "subsystem": "accel", 00:18:03.040 "config": [ 00:18:03.040 { 00:18:03.040 "method": "accel_set_options", 00:18:03.040 "params": { 00:18:03.040 "small_cache_size": 128, 00:18:03.040 "large_cache_size": 16, 00:18:03.040 "task_count": 2048, 00:18:03.040 "sequence_count": 2048, 00:18:03.040 "buf_count": 2048 00:18:03.040 } 00:18:03.040 } 00:18:03.040 ] 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "subsystem": "bdev", 00:18:03.040 "config": [ 00:18:03.040 { 00:18:03.040 "method": "bdev_set_options", 00:18:03.040 "params": { 00:18:03.040 "bdev_io_pool_size": 65535, 00:18:03.040 "bdev_io_cache_size": 256, 00:18:03.040 "bdev_auto_examine": true, 00:18:03.040 "iobuf_small_cache_size": 128, 00:18:03.040 "iobuf_large_cache_size": 16 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_raid_set_options", 00:18:03.040 "params": { 00:18:03.040 "process_window_size_kb": 1024 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_iscsi_set_options", 00:18:03.040 "params": { 00:18:03.040 "timeout_sec": 30 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_nvme_set_options", 00:18:03.040 "params": { 00:18:03.040 "action_on_timeout": "none", 00:18:03.040 "timeout_us": 0, 00:18:03.040 "timeout_admin_us": 0, 00:18:03.040 "keep_alive_timeout_ms": 10000, 00:18:03.040 "arbitration_burst": 0, 00:18:03.040 "low_priority_weight": 0, 00:18:03.040 "medium_priority_weight": 0, 00:18:03.040 "high_priority_weight": 0, 00:18:03.040 "nvme_adminq_poll_period_us": 10000, 00:18:03.040 "nvme_ioq_poll_period_us": 0, 00:18:03.040 "io_queue_requests": 512, 00:18:03.040 "delay_cmd_submit": true, 00:18:03.040 "transport_retry_count": 4, 00:18:03.040 "bdev_retry_count": 3, 
00:18:03.040 "transport_ack_timeout": 0, 00:18:03.040 "ctrlr_loss_timeout_sec": 0, 00:18:03.040 "reconnect_delay_sec": 0, 00:18:03.040 "fast_io_fail_timeout_sec": 0, 00:18:03.040 "disable_auto_failback": false, 00:18:03.040 "generate_uuids": false, 00:18:03.040 "transport_tos": 0, 00:18:03.040 "nvme_error_stat": false, 00:18:03.040 "rdma_srq_size": 0, 00:18:03.040 "io_path_stat": false, 00:18:03.040 "allow_accel_sequence": false, 00:18:03.040 "rdma_max_cq_size": 0, 00:18:03.040 "rdma_cm_event_timeout_ms": 0, 00:18:03.040 "dhchap_digests": [ 00:18:03.040 "sha256", 00:18:03.040 "sha384", 00:18:03.040 "sha512" 00:18:03.040 ], 00:18:03.040 "dhchap_dhgroups": [ 00:18:03.040 "null", 00:18:03.040 "ffdhe2048", 00:18:03.040 "ffdhe3072", 00:18:03.040 "ffdhe4096", 00:18:03.040 "ffdhe6144", 00:18:03.040 "ffdhe8192" 00:18:03.040 ] 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_nvme_attach_controller", 00:18:03.040 "params": { 00:18:03.040 "name": "nvme0", 00:18:03.040 "trtype": "TCP", 00:18:03.040 "adrfam": "IPv4", 00:18:03.040 "traddr": "10.0.0.2", 00:18:03.040 "trsvcid": "4420", 00:18:03.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.040 "prchk_reftag": false, 00:18:03.040 "prchk_guard": false, 00:18:03.040 "ctrlr_loss_timeout_sec": 0, 00:18:03.040 "reconnect_delay_sec": 0, 00:18:03.040 "fast_io_fail_timeout_sec": 0, 00:18:03.040 "psk": "key0", 00:18:03.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.040 "hdgst": false, 00:18:03.040 "ddgst": false 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_nvme_set_hotplug", 00:18:03.040 "params": { 00:18:03.040 "period_us": 100000, 00:18:03.040 "enable": false 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_enable_histogram", 00:18:03.040 "params": { 00:18:03.040 "name": "nvme0n1", 00:18:03.040 "enable": true 00:18:03.040 } 00:18:03.040 }, 00:18:03.040 { 00:18:03.040 "method": "bdev_wait_for_examine" 00:18:03.040 } 00:18:03.040 ] 00:18:03.040 }, 
00:18:03.040 { 00:18:03.040 "subsystem": "nbd", 00:18:03.040 "config": [] 00:18:03.040 } 00:18:03.040 ] 00:18:03.040 }' 00:18:03.040 22:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:03.040 [2024-04-24 22:09:45.171962] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:03.040 [2024-04-24 22:09:45.172046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965273 ] 00:18:03.040 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.040 [2024-04-24 22:09:45.239609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.299 [2024-04-24 22:09:45.358817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.299 [2024-04-24 22:09:45.529643] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.234 22:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.234 22:09:46 -- common/autotest_common.sh@850 -- # return 0 00:18:04.234 22:09:46 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:04.234 22:09:46 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:04.491 22:09:46 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.491 22:09:46 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:04.749 Running I/O for 1 seconds... 
00:18:05.682 00:18:05.682 Latency(us) 00:18:05.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.682 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:05.682 Verification LBA range: start 0x0 length 0x2000 00:18:05.682 nvme0n1 : 1.04 2929.80 11.44 0.00 0.00 42919.60 11505.21 78060.66 00:18:05.682 =================================================================================================================== 00:18:05.682 Total : 2929.80 11.44 0.00 0.00 42919.60 11505.21 78060.66 00:18:05.682 0 00:18:05.682 22:09:47 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:05.682 22:09:47 -- target/tls.sh@279 -- # cleanup 00:18:05.682 22:09:47 -- target/tls.sh@15 -- # process_shm --id 0 00:18:05.682 22:09:47 -- common/autotest_common.sh@794 -- # type=--id 00:18:05.682 22:09:47 -- common/autotest_common.sh@795 -- # id=0 00:18:05.682 22:09:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:05.682 22:09:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:05.682 22:09:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:05.682 22:09:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:05.682 22:09:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:05.682 22:09:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:05.682 nvmf_trace.0 00:18:05.940 22:09:47 -- common/autotest_common.sh@809 -- # return 0 00:18:05.940 22:09:47 -- target/tls.sh@16 -- # killprocess 3965273 00:18:05.940 22:09:47 -- common/autotest_common.sh@936 -- # '[' -z 3965273 ']' 00:18:05.940 22:09:47 -- common/autotest_common.sh@940 -- # kill -0 3965273 00:18:05.940 22:09:47 -- common/autotest_common.sh@941 -- # uname 00:18:05.940 22:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.940 22:09:47 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3965273 00:18:05.940 22:09:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.940 22:09:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.940 22:09:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3965273' 00:18:05.940 killing process with pid 3965273 00:18:05.940 22:09:48 -- common/autotest_common.sh@955 -- # kill 3965273 00:18:05.940 Received shutdown signal, test time was about 1.000000 seconds 00:18:05.940 00:18:05.940 Latency(us) 00:18:05.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.940 =================================================================================================================== 00:18:05.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.940 22:09:48 -- common/autotest_common.sh@960 -- # wait 3965273 00:18:06.199 22:09:48 -- target/tls.sh@17 -- # nvmftestfini 00:18:06.199 22:09:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:06.199 22:09:48 -- nvmf/common.sh@117 -- # sync 00:18:06.199 22:09:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.199 22:09:48 -- nvmf/common.sh@120 -- # set +e 00:18:06.199 22:09:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.199 22:09:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.199 rmmod nvme_tcp 00:18:06.199 rmmod nvme_fabrics 00:18:06.199 rmmod nvme_keyring 00:18:06.199 22:09:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.199 22:09:48 -- nvmf/common.sh@124 -- # set -e 00:18:06.199 22:09:48 -- nvmf/common.sh@125 -- # return 0 00:18:06.199 22:09:48 -- nvmf/common.sh@478 -- # '[' -n 3965123 ']' 00:18:06.199 22:09:48 -- nvmf/common.sh@479 -- # killprocess 3965123 00:18:06.199 22:09:48 -- common/autotest_common.sh@936 -- # '[' -z 3965123 ']' 00:18:06.199 22:09:48 -- common/autotest_common.sh@940 -- # kill -0 3965123 00:18:06.199 22:09:48 -- common/autotest_common.sh@941 -- # uname 
00:18:06.199 22:09:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.199 22:09:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3965123 00:18:06.199 22:09:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:06.199 22:09:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:06.199 22:09:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3965123' 00:18:06.199 killing process with pid 3965123 00:18:06.199 22:09:48 -- common/autotest_common.sh@955 -- # kill 3965123 00:18:06.199 [2024-04-24 22:09:48.403093] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:06.199 22:09:48 -- common/autotest_common.sh@960 -- # wait 3965123 00:18:06.457 22:09:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:06.457 22:09:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:06.457 22:09:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:06.457 22:09:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.457 22:09:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.457 22:09:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.457 22:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.457 22:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.989 22:09:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.989 22:09:50 -- target/tls.sh@18 -- # rm -f /tmp/tmp.DCHgTHohQ3 /tmp/tmp.3M1LyAddSE /tmp/tmp.LsrurwB9IF 00:18:08.989 00:18:08.989 real 1m29.747s 00:18:08.989 user 2m29.164s 00:18:08.989 sys 0m30.110s 00:18:08.989 22:09:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.989 22:09:50 -- common/autotest_common.sh@10 -- # set +x 00:18:08.989 ************************************ 00:18:08.989 END TEST nvmf_tls 
00:18:08.989 ************************************ 00:18:08.989 22:09:50 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.989 22:09:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:08.989 22:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.989 22:09:50 -- common/autotest_common.sh@10 -- # set +x 00:18:08.989 ************************************ 00:18:08.989 START TEST nvmf_fips 00:18:08.989 ************************************ 00:18:08.989 22:09:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:08.989 * Looking for test storage... 00:18:08.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:08.989 22:09:50 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.989 22:09:50 -- nvmf/common.sh@7 -- # uname -s 00:18:08.989 22:09:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.989 22:09:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.989 22:09:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.989 22:09:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.989 22:09:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.989 22:09:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.989 22:09:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.989 22:09:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.989 22:09:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.989 22:09:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.989 22:09:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:08.989 22:09:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:08.989 22:09:50 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.989 22:09:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.989 22:09:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.989 22:09:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.989 22:09:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.989 22:09:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.989 22:09:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.989 22:09:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.989 22:09:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.989 22:09:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.989 22:09:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.989 22:09:50 -- paths/export.sh@5 -- # export PATH 00:18:08.989 22:09:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.989 22:09:50 -- nvmf/common.sh@47 -- # : 0 00:18:08.989 22:09:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.989 22:09:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.989 22:09:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.989 22:09:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.989 22:09:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.989 22:09:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.989 22:09:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.989 22:09:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.989 22:09:50 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.989 22:09:50 -- fips/fips.sh@89 -- # check_openssl_version 
00:18:08.989 22:09:50 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:08.989 22:09:50 -- fips/fips.sh@85 -- # openssl version 00:18:08.989 22:09:50 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:08.989 22:09:50 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:08.989 22:09:50 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:08.989 22:09:50 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:08.989 22:09:50 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:08.989 22:09:50 -- scripts/common.sh@333 -- # IFS=.-: 00:18:08.989 22:09:50 -- scripts/common.sh@333 -- # read -ra ver1 00:18:08.989 22:09:50 -- scripts/common.sh@334 -- # IFS=.-: 00:18:08.989 22:09:50 -- scripts/common.sh@334 -- # read -ra ver2 00:18:08.989 22:09:50 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:08.989 22:09:50 -- scripts/common.sh@337 -- # ver1_l=3 00:18:08.989 22:09:50 -- scripts/common.sh@338 -- # ver2_l=3 00:18:08.989 22:09:50 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:08.989 22:09:50 -- scripts/common.sh@341 -- # case "$op" in 00:18:08.989 22:09:50 -- scripts/common.sh@345 -- # : 1 00:18:08.989 22:09:50 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:08.989 22:09:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.989 22:09:50 -- scripts/common.sh@362 -- # decimal 3 00:18:08.989 22:09:50 -- scripts/common.sh@350 -- # local d=3 00:18:08.989 22:09:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.989 22:09:50 -- scripts/common.sh@352 -- # echo 3 00:18:08.989 22:09:50 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:08.989 22:09:50 -- scripts/common.sh@363 -- # decimal 3 00:18:08.989 22:09:50 -- scripts/common.sh@350 -- # local d=3 00:18:08.989 22:09:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:08.989 22:09:50 -- scripts/common.sh@352 -- # echo 3 00:18:08.989 22:09:50 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:08.989 22:09:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.989 22:09:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:08.990 22:09:50 -- scripts/common.sh@361 -- # (( v++ )) 00:18:08.990 22:09:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.990 22:09:50 -- scripts/common.sh@362 -- # decimal 0 00:18:08.990 22:09:50 -- scripts/common.sh@350 -- # local d=0 00:18:08.990 22:09:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.990 22:09:50 -- scripts/common.sh@352 -- # echo 0 00:18:08.990 22:09:50 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:08.990 22:09:50 -- scripts/common.sh@363 -- # decimal 0 00:18:08.990 22:09:50 -- scripts/common.sh@350 -- # local d=0 00:18:08.990 22:09:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.990 22:09:50 -- scripts/common.sh@352 -- # echo 0 00:18:08.990 22:09:50 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:08.990 22:09:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.990 22:09:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:08.990 22:09:50 -- scripts/common.sh@361 -- # (( v++ )) 00:18:08.990 22:09:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.990 22:09:50 -- scripts/common.sh@362 -- # decimal 9 00:18:08.990 22:09:50 -- scripts/common.sh@350 -- # local d=9 00:18:08.990 22:09:51 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:08.990 22:09:51 -- scripts/common.sh@352 -- # echo 9 00:18:08.990 22:09:51 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:08.990 22:09:51 -- scripts/common.sh@363 -- # decimal 0 00:18:08.990 22:09:51 -- scripts/common.sh@350 -- # local d=0 00:18:08.990 22:09:51 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:08.990 22:09:51 -- scripts/common.sh@352 -- # echo 0 00:18:08.990 22:09:51 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:08.990 22:09:51 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:08.990 22:09:51 -- scripts/common.sh@364 -- # return 0 00:18:08.990 22:09:51 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:08.990 22:09:51 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:08.990 22:09:51 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:08.990 22:09:51 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:08.990 22:09:51 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:08.990 22:09:51 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:08.990 22:09:51 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:08.990 22:09:51 -- fips/fips.sh@113 -- # build_openssl_config 00:18:08.990 22:09:51 -- fips/fips.sh@37 -- # cat 00:18:08.990 22:09:51 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:08.990 22:09:51 -- fips/fips.sh@58 -- # cat - 00:18:08.990 22:09:51 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:08.990 22:09:51 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:08.990 22:09:51 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:08.990 22:09:51 -- fips/fips.sh@116 -- # openssl list -providers 00:18:08.990 22:09:51 -- fips/fips.sh@116 -- # grep name 00:18:08.990 22:09:51 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:08.990 22:09:51 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:08.990 22:09:51 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:08.990 22:09:51 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:08.990 22:09:51 -- fips/fips.sh@127 -- # : 00:18:08.990 22:09:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:08.990 22:09:51 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:08.990 22:09:51 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:08.990 22:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.990 22:09:51 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:08.990 22:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.990 22:09:51 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:08.990 22:09:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.990 22:09:51 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:08.990 22:09:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:08.990 22:09:51 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:08.990 Error setting digest 00:18:08.990 00F2391D357F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:08.990 
00F2391D357F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:08.990 22:09:51 -- common/autotest_common.sh@641 -- # es=1 00:18:08.990 22:09:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:08.990 22:09:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:08.990 22:09:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:08.990 22:09:51 -- fips/fips.sh@130 -- # nvmftestinit 00:18:08.990 22:09:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:08.990 22:09:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.990 22:09:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:08.990 22:09:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:08.990 22:09:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:08.990 22:09:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.990 22:09:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.990 22:09:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.990 22:09:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:08.990 22:09:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:08.990 22:09:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.990 22:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:11.521 22:09:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:11.521 22:09:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.521 22:09:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.521 22:09:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.521 22:09:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.521 22:09:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.521 22:09:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.521 22:09:53 -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.521 22:09:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.521 22:09:53 -- 
nvmf/common.sh@296 -- # e810=() 00:18:11.521 22:09:53 -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.521 22:09:53 -- nvmf/common.sh@297 -- # x722=() 00:18:11.521 22:09:53 -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.521 22:09:53 -- nvmf/common.sh@298 -- # mlx=() 00:18:11.521 22:09:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.521 22:09:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.521 22:09:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.521 22:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:11.521 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:11.521 
22:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.521 22:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:11.521 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:11.521 22:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.521 22:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.521 22:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.521 22:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:11.521 Found net devices under 0000:84:00.0: cvl_0_0 00:18:11.521 22:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.521 22:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.521 22:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.521 22:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:11.521 Found net devices under 0000:84:00.1: cvl_0_1 00:18:11.521 22:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:11.521 22:09:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:11.521 22:09:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.521 22:09:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.521 22:09:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.521 22:09:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.521 22:09:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.521 22:09:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.521 22:09:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.521 22:09:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.521 22:09:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.521 22:09:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.521 22:09:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.521 22:09:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.521 22:09:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.521 22:09:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.521 22:09:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.521 22:09:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:18:11.521 22:09:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.521 22:09:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.521 22:09:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:18:11.521 00:18:11.521 --- 10.0.0.2 ping statistics --- 00:18:11.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.521 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:11.521 22:09:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:18:11.521 00:18:11.521 --- 10.0.0.1 ping statistics --- 00:18:11.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.521 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:11.521 22:09:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.521 22:09:53 -- nvmf/common.sh@411 -- # return 0 00:18:11.521 22:09:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:11.521 22:09:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.521 22:09:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:11.521 22:09:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.521 22:09:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:11.521 22:09:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:11.779 22:09:53 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:11.779 22:09:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:11.779 22:09:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:11.779 22:09:53 -- common/autotest_common.sh@10 -- # set +x 
00:18:11.779 22:09:53 -- nvmf/common.sh@470 -- # nvmfpid=3967784 00:18:11.779 22:09:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.779 22:09:53 -- nvmf/common.sh@471 -- # waitforlisten 3967784 00:18:11.779 22:09:53 -- common/autotest_common.sh@817 -- # '[' -z 3967784 ']' 00:18:11.779 22:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.779 22:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.779 22:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.779 22:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.779 22:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:11.779 [2024-04-24 22:09:53.871338] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:11.779 [2024-04-24 22:09:53.871450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.779 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.779 [2024-04-24 22:09:53.950554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.037 [2024-04-24 22:09:54.069027] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.037 [2024-04-24 22:09:54.069087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:12.037 [2024-04-24 22:09:54.069103] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.037 [2024-04-24 22:09:54.069117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.037 [2024-04-24 22:09:54.069129] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.037 [2024-04-24 22:09:54.069170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.972 22:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:12.972 22:09:54 -- common/autotest_common.sh@850 -- # return 0 00:18:12.972 22:09:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:12.972 22:09:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:12.972 22:09:54 -- common/autotest_common.sh@10 -- # set +x 00:18:12.972 22:09:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.972 22:09:54 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:12.972 22:09:54 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:12.972 22:09:54 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:12.972 22:09:54 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:12.972 22:09:54 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:12.972 22:09:54 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:12.972 22:09:54 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:12.972 22:09:54 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.230 [2024-04-24 22:09:55.247737] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.230 [2024-04-24 22:09:55.263690] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:13.230 [2024-04-24 22:09:55.263768] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.230 [2024-04-24 22:09:55.263987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.230 [2024-04-24 22:09:55.296548] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:13.230 malloc0 00:18:13.230 22:09:55 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.230 22:09:55 -- fips/fips.sh@147 -- # bdevperf_pid=3967938 00:18:13.230 22:09:55 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.230 22:09:55 -- fips/fips.sh@148 -- # waitforlisten 3967938 /var/tmp/bdevperf.sock 00:18:13.230 22:09:55 -- common/autotest_common.sh@817 -- # '[' -z 3967938 ']' 00:18:13.230 22:09:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.230 22:09:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.230 22:09:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.230 22:09:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.230 22:09:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.230 [2024-04-24 22:09:55.389225] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:18:13.230 [2024-04-24 22:09:55.389317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967938 ] 00:18:13.230 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.230 [2024-04-24 22:09:55.452524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.489 [2024-04-24 22:09:55.575434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.423 22:09:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.423 22:09:56 -- common/autotest_common.sh@850 -- # return 0 00:18:14.423 22:09:56 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.681 [2024-04-24 22:09:56.911770] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.681 [2024-04-24 22:09:56.911927] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.940 TLSTESTn1 00:18:14.940 22:09:57 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.940 Running I/O for 10 seconds... 
00:18:24.989
00:18:24.989 Latency(us)
00:18:24.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:24.989 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:24.989 Verification LBA range: start 0x0 length 0x2000
00:18:24.989 TLSTESTn1 : 10.03 3151.17 12.31 0.00 0.00 40534.00 8349.77 66409.81
00:18:24.989 ===================================================================================================================
00:18:24.989 Total : 3151.17 12.31 0.00 0.00 40534.00 8349.77 66409.81
00:18:24.989 0
00:18:24.989 22:10:07 -- fips/fips.sh@1 -- # cleanup
00:18:24.989 22:10:07 -- fips/fips.sh@15 -- # process_shm --id 0
00:18:24.989 22:10:07 -- common/autotest_common.sh@794 -- # type=--id
00:18:24.989 22:10:07 -- common/autotest_common.sh@795 -- # id=0
00:18:24.989 22:10:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']'
00:18:24.989 22:10:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:24.989 22:10:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0
00:18:24.989 22:10:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]]
00:18:24.989 22:10:07 -- common/autotest_common.sh@806 -- # for n in $shm_files
00:18:24.989 22:10:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:24.990 nvmf_trace.0
00:18:24.990 22:10:07 -- common/autotest_common.sh@809 -- # return 0
00:18:24.990 22:10:07 -- fips/fips.sh@16 -- # killprocess 3967938
00:18:24.990 22:10:07 -- common/autotest_common.sh@936 -- # '[' -z 3967938 ']'
00:18:24.990 22:10:07 -- common/autotest_common.sh@940 -- # kill -0 3967938
00:18:24.990 22:10:07 -- common/autotest_common.sh@941 -- # uname
00:18:24.990 22:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:24.990 22:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3967938
00:18:25.247 22:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:25.247 22:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:25.247 22:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3967938'
00:18:25.247 killing process with pid 3967938
00:18:25.247 22:10:07 -- common/autotest_common.sh@955 -- # kill 3967938
00:18:25.247 Received shutdown signal, test time was about 10.000000 seconds
00:18:25.247
00:18:25.247 Latency(us)
00:18:25.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:25.247 ===================================================================================================================
00:18:25.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:25.248 [2024-04-24 22:10:07.256069] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:25.248 22:10:07 -- common/autotest_common.sh@960 -- # wait 3967938
00:18:25.506 22:10:07 -- fips/fips.sh@17 -- # nvmftestfini
00:18:25.506 22:10:07 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:25.506 22:10:07 -- nvmf/common.sh@117 -- # sync
00:18:25.506 22:10:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:25.506 22:10:07 -- nvmf/common.sh@120 -- # set +e
00:18:25.506 22:10:07 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:25.506 22:10:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:25.506 rmmod nvme_tcp
00:18:25.506 rmmod nvme_fabrics
00:18:25.506 rmmod nvme_keyring
00:18:25.506 22:10:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:25.506 22:10:07 -- nvmf/common.sh@124 -- # set -e
00:18:25.506 22:10:07 -- nvmf/common.sh@125 -- # return 0
00:18:25.506 22:10:07 -- nvmf/common.sh@478 -- # '[' -n 3967784 ']'
00:18:25.506 22:10:07 -- nvmf/common.sh@479 -- # killprocess 3967784
00:18:25.506 22:10:07 -- common/autotest_common.sh@936 -- # '[' -z 3967784 ']'
00:18:25.506 22:10:07 -- common/autotest_common.sh@940 -- # kill -0 3967784
00:18:25.506 22:10:07 -- common/autotest_common.sh@941 -- # uname
00:18:25.506 22:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:25.506 22:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3967784
00:18:25.506 22:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:25.506 22:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:25.506 22:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3967784'
00:18:25.506 killing process with pid 3967784
00:18:25.506 22:10:07 -- common/autotest_common.sh@955 -- # kill 3967784
00:18:25.506 [2024-04-24 22:10:07.587055] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:18:25.506 [2024-04-24 22:10:07.587108] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:18:25.506 22:10:07 -- common/autotest_common.sh@960 -- # wait 3967784
00:18:25.764 22:10:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:25.764 22:10:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:18:25.764 22:10:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:18:25.764 22:10:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:25.764 22:10:07 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:25.764 22:10:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:25.764 22:10:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:25.764 22:10:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:28.292 22:10:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:28.292 22:10:09 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:28.292
00:18:28.292 real 0m19.026s
00:18:28.292 user 0m24.806s
00:18:28.292 sys 0m6.568s
00:18:28.292 22:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:28.293 22:10:09 -- common/autotest_common.sh@10 -- # set +x
00:18:28.293 ************************************
00:18:28.293 END TEST nvmf_fips
00:18:28.293 ************************************
00:18:28.293 22:10:09 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']'
00:18:28.293 22:10:09 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]]
00:18:28.293 22:10:09 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']'
00:18:28.293 22:10:09 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs
00:18:28.293 22:10:09 -- nvmf/common.sh@285 -- # xtrace_disable
00:18:28.293 22:10:09 -- common/autotest_common.sh@10 -- # set +x
00:18:30.191 22:10:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:18:30.191 22:10:12 -- nvmf/common.sh@291 -- # pci_devs=()
00:18:30.191 22:10:12 -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:30.191 22:10:12 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:30.191 22:10:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:30.191 22:10:12 -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:30.191 22:10:12 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:30.191 22:10:12 -- nvmf/common.sh@295 -- # net_devs=()
00:18:30.191 22:10:12 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:30.191 22:10:12 -- nvmf/common.sh@296 -- # e810=()
00:18:30.191 22:10:12 -- nvmf/common.sh@296 -- # local -ga e810
00:18:30.191 22:10:12 -- nvmf/common.sh@297 -- # x722=()
00:18:30.191 22:10:12 -- nvmf/common.sh@297 -- # local -ga x722
00:18:30.191 22:10:12 -- nvmf/common.sh@298 -- # mlx=()
00:18:30.191 22:10:12 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:30.191 22:10:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:30.191 22:10:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:30.191 22:10:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:30.192 22:10:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:30.192 22:10:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:30.192 22:10:12 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:30.192 22:10:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:30.192 22:10:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:30.192 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:30.192 22:10:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:30.192 22:10:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:30.192 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:30.192 22:10:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:30.192 22:10:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:30.192 22:10:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:30.192 22:10:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:30.192 22:10:12 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:30.192 22:10:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:30.192 22:10:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:30.192 Found net devices under 0000:84:00.0: cvl_0_0
00:18:30.192 22:10:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:30.192 22:10:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:30.192 22:10:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:30.192 22:10:12 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:30.192 22:10:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:30.192 22:10:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:30.192 Found net devices under 0000:84:00.1: cvl_0_1
00:18:30.192 22:10:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:30.192 22:10:12 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:30.192 22:10:12 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:30.192 22:10:12 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 ))
00:18:30.192 22:10:12 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:18:30.192 22:10:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:30.192 22:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:30.192 22:10:12 -- common/autotest_common.sh@10 -- # set +x
00:18:30.451 ************************************
00:18:30.451 START TEST nvmf_perf_adq
00:18:30.451 ************************************
00:18:30.451 22:10:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:18:30.451 * Looking for test storage...
00:18:30.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:30.451 22:10:12 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:30.451 22:10:12 -- nvmf/common.sh@7 -- # uname -s
00:18:30.451 22:10:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:30.451 22:10:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:30.451 22:10:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:30.451 22:10:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:30.451 22:10:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:30.451 22:10:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:30.451 22:10:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:30.451 22:10:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:30.451 22:10:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:30.451 22:10:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:30.451 22:10:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:30.451 22:10:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:18:30.451 22:10:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:30.451 22:10:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:30.451 22:10:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:30.451 22:10:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:30.451 22:10:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:30.451 22:10:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:30.451 22:10:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:30.451 22:10:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:30.451 22:10:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:30.451 22:10:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:30.451 22:10:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:30.451 22:10:12 -- paths/export.sh@5 -- # export PATH
00:18:30.451 22:10:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:30.451 22:10:12 -- nvmf/common.sh@47 -- # : 0
00:18:30.451 22:10:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:30.451 22:10:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:30.451 22:10:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:30.451 22:10:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:30.451 22:10:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:30.451 22:10:12 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:30.451 22:10:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:30.451 22:10:12 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:30.451 22:10:12 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:18:30.451 22:10:12 -- nvmf/common.sh@285 -- # xtrace_disable
00:18:30.451 22:10:12 -- common/autotest_common.sh@10 -- # set +x
00:18:33.003 22:10:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:18:33.003 22:10:14 -- nvmf/common.sh@291 -- # pci_devs=()
00:18:33.003 22:10:14 -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:33.003 22:10:14 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:33.003 22:10:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:33.003 22:10:14 -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:33.003 22:10:14 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:33.003 22:10:14 -- nvmf/common.sh@295 -- # net_devs=()
00:18:33.003 22:10:14 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:33.003 22:10:14 -- nvmf/common.sh@296 -- # e810=()
00:18:33.003 22:10:14 -- nvmf/common.sh@296 -- # local -ga e810
00:18:33.003 22:10:14 -- nvmf/common.sh@297 -- # x722=()
00:18:33.003 22:10:14 -- nvmf/common.sh@297 -- # local -ga x722
00:18:33.003 22:10:14 -- nvmf/common.sh@298 -- # mlx=()
00:18:33.003 22:10:14 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:33.004 22:10:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:33.004 22:10:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:33.004 22:10:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:33.004 22:10:14 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:33.004 22:10:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:33.004 22:10:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:33.004 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:33.004 22:10:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:33.004 22:10:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:33.004 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:33.004 22:10:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:33.004 22:10:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:33.004 22:10:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:33.004 22:10:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:33.004 22:10:14 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:33.004 22:10:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:33.004 22:10:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:33.004 Found net devices under 0000:84:00.0: cvl_0_0
00:18:33.004 22:10:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:33.004 22:10:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:33.004 22:10:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:33.004 22:10:14 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:33.004 22:10:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:33.004 22:10:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:33.004 Found net devices under 0000:84:00.1: cvl_0_1
00:18:33.004 22:10:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:33.004 22:10:14 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:33.004 22:10:14 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:33.004 22:10:14 -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:18:33.004 22:10:14 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:18:33.004 22:10:14 -- target/perf_adq.sh@59 -- # adq_reload_driver
00:18:33.004 22:10:14 -- target/perf_adq.sh@52 -- # rmmod ice
00:18:33.263 22:10:15 -- target/perf_adq.sh@53 -- # modprobe ice
00:18:35.166 22:10:17 -- target/perf_adq.sh@54 -- # sleep 5
00:18:40.433 22:10:22 -- target/perf_adq.sh@67 -- # nvmftestinit
00:18:40.433 22:10:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:18:40.433 22:10:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:40.433 22:10:22 -- nvmf/common.sh@437 -- # prepare_net_devs
00:18:40.433 22:10:22 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:18:40.433 22:10:22 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:18:40.433 22:10:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:40.433 22:10:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:40.433 22:10:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:40.433 22:10:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:18:40.433 22:10:22 -- nvmf/common.sh@285 -- # xtrace_disable
00:18:40.433 22:10:22 -- common/autotest_common.sh@10 -- # set +x
00:18:40.433 22:10:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:18:40.433 22:10:22 -- nvmf/common.sh@291 -- # pci_devs=()
00:18:40.433 22:10:22 -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:40.433 22:10:22 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:40.433 22:10:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:40.433 22:10:22 -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:40.433 22:10:22 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:40.433 22:10:22 -- nvmf/common.sh@295 -- # net_devs=()
00:18:40.433 22:10:22 -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:40.433 22:10:22 -- nvmf/common.sh@296 -- # e810=()
00:18:40.433 22:10:22 -- nvmf/common.sh@296 -- # local -ga e810
00:18:40.433 22:10:22 -- nvmf/common.sh@297 -- # x722=()
00:18:40.433 22:10:22 -- nvmf/common.sh@297 -- # local -ga x722
00:18:40.433 22:10:22 -- nvmf/common.sh@298 -- # mlx=()
00:18:40.433 22:10:22 -- nvmf/common.sh@298 -- # local -ga mlx
00:18:40.433 22:10:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:40.433 22:10:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:40.433 22:10:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:40.433 22:10:22 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:40.433 22:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:40.433 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:40.433 22:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:40.433 22:10:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:40.433 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:40.433 22:10:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:40.433 22:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:40.433 22:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:40.433 22:10:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:40.433 Found net devices under 0000:84:00.0: cvl_0_0
00:18:40.433 22:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:40.433 22:10:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:40.433 22:10:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:40.433 22:10:22 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:40.433 22:10:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:40.433 Found net devices under 0000:84:00.1: cvl_0_1
00:18:40.433 22:10:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:40.433 22:10:22 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@403 -- # is_hw=yes
00:18:40.433 22:10:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:18:40.433 22:10:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:18:40.433 22:10:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:40.433 22:10:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:40.433 22:10:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:40.433 22:10:22 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:40.433 22:10:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:40.433 22:10:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:40.433 22:10:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:40.433 22:10:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:40.433 22:10:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:40.433 22:10:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:40.433 22:10:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:40.433 22:10:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:40.433 22:10:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:40.433 22:10:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:40.433 22:10:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:40.433 22:10:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:40.433 22:10:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:40.433 22:10:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:40.433 22:10:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:40.433 22:10:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:40.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:40.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:18:40.434
00:18:40.434 --- 10.0.0.2 ping statistics ---
00:18:40.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:40.434 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:18:40.434 22:10:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:40.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:40.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:18:40.434
00:18:40.434 --- 10.0.0.1 ping statistics ---
00:18:40.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:40.434 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:18:40.434 22:10:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:40.434 22:10:22 -- nvmf/common.sh@411 -- # return 0
00:18:40.434 22:10:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:18:40.434 22:10:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:40.434 22:10:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:18:40.434 22:10:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:18:40.434 22:10:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:40.434 22:10:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:18:40.434 22:10:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:18:40.434 22:10:22 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc
00:18:40.434 22:10:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:18:40.434 22:10:22 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:40.434 22:10:22 -- common/autotest_common.sh@10 -- # set +x
00:18:40.434 22:10:22 -- nvmf/common.sh@470 -- # nvmfpid=3973979
00:18:40.434 22:10:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:18:40.434 22:10:22 -- nvmf/common.sh@471 -- # waitforlisten 3973979
00:18:40.434 22:10:22 -- common/autotest_common.sh@817 -- # '[' -z 3973979 ']'
00:18:40.434 22:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:40.434 22:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:40.434 22:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:40.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.434 22:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.434 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.434 [2024-04-24 22:10:22.332591] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:40.434 [2024-04-24 22:10:22.332681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.434 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.434 [2024-04-24 22:10:22.407638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:40.434 [2024-04-24 22:10:22.529604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.434 [2024-04-24 22:10:22.529671] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.434 [2024-04-24 22:10:22.529688] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.434 [2024-04-24 22:10:22.529702] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.434 [2024-04-24 22:10:22.529725] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:40.434 [2024-04-24 22:10:22.529808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.434 [2024-04-24 22:10:22.529864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.434 [2024-04-24 22:10:22.529913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.434 [2024-04-24 22:10:22.529916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.434 22:10:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:40.434 22:10:22 -- common/autotest_common.sh@850 -- # return 0 00:18:40.434 22:10:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:40.434 22:10:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:40.434 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.434 22:10:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.434 22:10:22 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:18:40.434 22:10:22 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:40.434 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.434 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.434 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.434 22:10:22 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:18:40.434 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.434 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:40.690 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.690 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 [2024-04-24 22:10:22.780562] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:40.690 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.690 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 Malloc1 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:40.690 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.690 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:40.690 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.690 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.690 22:10:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:40.690 22:10:22 -- common/autotest_common.sh@10 -- # set +x 00:18:40.690 [2024-04-24 22:10:22.834434] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:40.690 [2024-04-24 22:10:22.834747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.690 22:10:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.690 22:10:22 -- target/perf_adq.sh@73 -- # perfpid=3974005 00:18:40.690 22:10:22 -- target/perf_adq.sh@70 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:40.690 22:10:22 -- target/perf_adq.sh@74 -- # sleep 2 00:18:40.690 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.588 22:10:24 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:18:42.588 22:10:24 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:42.846 22:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.846 22:10:24 -- target/perf_adq.sh@76 -- # wc -l 00:18:42.846 22:10:24 -- common/autotest_common.sh@10 -- # set +x 00:18:42.846 22:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.846 22:10:24 -- target/perf_adq.sh@76 -- # count=4 00:18:42.846 22:10:24 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:18:42.846 22:10:24 -- target/perf_adq.sh@81 -- # wait 3974005 00:18:50.960 Initializing NVMe Controllers 00:18:50.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:50.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:50.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:50.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:50.960 Initialization complete. Launching workers. 
00:18:50.960 ======================================================== 00:18:50.960 Latency(us) 00:18:50.960 Device Information : IOPS MiB/s Average min max 00:18:50.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9033.40 35.29 7087.00 2557.68 10198.66 00:18:50.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9193.80 35.91 6964.77 5031.24 9831.99 00:18:50.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9208.30 35.97 6951.54 3446.66 8490.31 00:18:50.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9193.10 35.91 6964.52 2018.25 9267.45 00:18:50.960 ======================================================== 00:18:50.960 Total : 36628.59 143.08 6991.52 2018.25 10198.66 00:18:50.960 00:18:50.960 22:10:33 -- target/perf_adq.sh@82 -- # nvmftestfini 00:18:50.960 22:10:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:50.960 22:10:33 -- nvmf/common.sh@117 -- # sync 00:18:50.960 22:10:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.960 22:10:33 -- nvmf/common.sh@120 -- # set +e 00:18:50.960 22:10:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.960 22:10:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.960 rmmod nvme_tcp 00:18:50.960 rmmod nvme_fabrics 00:18:50.960 rmmod nvme_keyring 00:18:50.960 22:10:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.960 22:10:33 -- nvmf/common.sh@124 -- # set -e 00:18:50.960 22:10:33 -- nvmf/common.sh@125 -- # return 0 00:18:50.960 22:10:33 -- nvmf/common.sh@478 -- # '[' -n 3973979 ']' 00:18:50.960 22:10:33 -- nvmf/common.sh@479 -- # killprocess 3973979 00:18:50.960 22:10:33 -- common/autotest_common.sh@936 -- # '[' -z 3973979 ']' 00:18:50.960 22:10:33 -- common/autotest_common.sh@940 -- # kill -0 3973979 00:18:50.960 22:10:33 -- common/autotest_common.sh@941 -- # uname 00:18:50.961 22:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.961 22:10:33 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3973979 00:18:50.961 22:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.961 22:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.961 22:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3973979' 00:18:50.961 killing process with pid 3973979 00:18:50.961 22:10:33 -- common/autotest_common.sh@955 -- # kill 3973979 00:18:50.961 [2024-04-24 22:10:33.135563] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:50.961 22:10:33 -- common/autotest_common.sh@960 -- # wait 3973979 00:18:51.220 22:10:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:51.220 22:10:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:51.220 22:10:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:51.220 22:10:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.220 22:10:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.220 22:10:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.220 22:10:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.220 22:10:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.754 22:10:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.754 22:10:35 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:18:53.754 22:10:35 -- target/perf_adq.sh@52 -- # rmmod ice 00:18:54.012 22:10:36 -- target/perf_adq.sh@53 -- # modprobe ice 00:18:55.503 22:10:37 -- target/perf_adq.sh@54 -- # sleep 5 00:19:00.768 22:10:42 -- target/perf_adq.sh@87 -- # nvmftestinit 00:19:00.768 22:10:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:00.768 22:10:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.768 22:10:42 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:19:00.768 22:10:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:00.768 22:10:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:00.768 22:10:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.768 22:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.768 22:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.768 22:10:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:00.768 22:10:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.768 22:10:42 -- common/autotest_common.sh@10 -- # set +x 00:19:00.768 22:10:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:00.768 22:10:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.768 22:10:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.768 22:10:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.768 22:10:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.768 22:10:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.768 22:10:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.768 22:10:42 -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.768 22:10:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.768 22:10:42 -- nvmf/common.sh@296 -- # e810=() 00:19:00.768 22:10:42 -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.768 22:10:42 -- nvmf/common.sh@297 -- # x722=() 00:19:00.768 22:10:42 -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.768 22:10:42 -- nvmf/common.sh@298 -- # mlx=() 00:19:00.768 22:10:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.768 22:10:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.768 22:10:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.768 22:10:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:00.768 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:00.768 22:10:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.768 22:10:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:00.768 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:00.768 22:10:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.768 
22:10:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.768 22:10:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.768 22:10:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.768 22:10:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:00.768 Found net devices under 0000:84:00.0: cvl_0_0 00:19:00.768 22:10:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.768 22:10:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.768 22:10:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.768 22:10:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:00.768 Found net devices under 0000:84:00.1: cvl_0_1 00:19:00.768 22:10:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:00.768 22:10:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:00.768 22:10:42 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:19:00.768 22:10:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.768 22:10:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.768 22:10:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.768 22:10:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.768 22:10:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.768 22:10:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.768 22:10:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.768 22:10:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.768 22:10:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.768 22:10:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.768 22:10:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.768 22:10:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.768 22:10:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.768 22:10:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.768 22:10:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.768 22:10:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.768 22:10:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.768 22:10:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:00.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:00.768 00:19:00.768 --- 10.0.0.2 ping statistics --- 00:19:00.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.768 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:00.768 22:10:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:19:00.768 00:19:00.768 --- 10.0.0.1 ping statistics --- 00:19:00.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.768 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:00.768 22:10:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.768 22:10:42 -- nvmf/common.sh@411 -- # return 0 00:19:00.768 22:10:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:00.768 22:10:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.768 22:10:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:00.768 22:10:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.768 22:10:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:00.768 22:10:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:00.768 22:10:42 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:19:00.768 22:10:42 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:00.768 22:10:42 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:00.768 22:10:42 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:00.768 net.core.busy_poll = 1 00:19:00.768 22:10:42 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:00.768 net.core.busy_read = 1 00:19:00.768 22:10:42 -- target/perf_adq.sh@29 -- # 
tc=/usr/sbin/tc 00:19:00.768 22:10:42 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:00.768 22:10:42 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:00.768 22:10:42 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:00.769 22:10:42 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:00.769 22:10:42 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:00.769 22:10:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:00.769 22:10:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:00.769 22:10:42 -- common/autotest_common.sh@10 -- # set +x 00:19:00.769 22:10:42 -- nvmf/common.sh@470 -- # nvmfpid=3976614 00:19:00.769 22:10:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:00.769 22:10:42 -- nvmf/common.sh@471 -- # waitforlisten 3976614 00:19:00.769 22:10:42 -- common/autotest_common.sh@817 -- # '[' -z 3976614 ']' 00:19:00.769 22:10:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.769 22:10:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.769 22:10:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.769 22:10:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.769 22:10:42 -- common/autotest_common.sh@10 -- # set +x 00:19:00.769 [2024-04-24 22:10:42.950383] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:00.769 [2024-04-24 22:10:42.950497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.769 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.026 [2024-04-24 22:10:43.031127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.026 [2024-04-24 22:10:43.153209] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.026 [2024-04-24 22:10:43.153282] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.026 [2024-04-24 22:10:43.153298] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.026 [2024-04-24 22:10:43.153311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.026 [2024-04-24 22:10:43.153323] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.026 [2024-04-24 22:10:43.153424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.026 [2024-04-24 22:10:43.153456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.026 [2024-04-24 22:10:43.153509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.026 [2024-04-24 22:10:43.153512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.285 22:10:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.285 22:10:43 -- common/autotest_common.sh@850 -- # return 0 00:19:01.285 22:10:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:01.285 22:10:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 22:10:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.285 22:10:43 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:19:01.285 22:10:43 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 [2024-04-24 22:10:43.465536] tcp.c: 669:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 Malloc1 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.285 22:10:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.285 22:10:43 -- common/autotest_common.sh@10 -- # set +x 00:19:01.285 [2024-04-24 22:10:43.520050] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:01.285 [2024-04-24 22:10:43.520376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.285 22:10:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.285 22:10:43 -- target/perf_adq.sh@94 -- # perfpid=3976659 00:19:01.285 22:10:43 -- target/perf_adq.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:01.285 22:10:43 -- target/perf_adq.sh@95 -- # sleep 2 00:19:01.544 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.445 22:10:45 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:19:03.445 22:10:45 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:03.445 22:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.445 22:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:03.445 22:10:45 -- target/perf_adq.sh@97 -- # wc -l 00:19:03.445 22:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.445 22:10:45 -- target/perf_adq.sh@97 -- # count=2 00:19:03.445 22:10:45 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:19:03.445 22:10:45 -- target/perf_adq.sh@103 -- # wait 3976659 00:19:11.556 Initializing NVMe Controllers 00:19:11.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:11.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:11.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:11.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:11.556 Initialization complete. Launching workers. 
00:19:11.556 ======================================================== 00:19:11.556 Latency(us) 00:19:11.556 Device Information : IOPS MiB/s Average min max 00:19:11.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4172.50 16.30 15343.06 2313.96 63332.40 00:19:11.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11592.30 45.28 5537.20 1504.35 46164.92 00:19:11.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4319.80 16.87 14821.23 2095.72 64906.03 00:19:11.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3895.80 15.22 16434.43 2770.66 64750.74 00:19:11.556 ======================================================== 00:19:11.556 Total : 23980.40 93.67 10686.14 1504.35 64906.03 00:19:11.556 00:19:11.556 22:10:53 -- target/perf_adq.sh@104 -- # nvmftestfini 00:19:11.556 22:10:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:11.556 22:10:53 -- nvmf/common.sh@117 -- # sync 00:19:11.556 22:10:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.556 22:10:53 -- nvmf/common.sh@120 -- # set +e 00:19:11.556 22:10:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.556 22:10:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.556 rmmod nvme_tcp 00:19:11.556 rmmod nvme_fabrics 00:19:11.556 rmmod nvme_keyring 00:19:11.556 22:10:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.556 22:10:53 -- nvmf/common.sh@124 -- # set -e 00:19:11.556 22:10:53 -- nvmf/common.sh@125 -- # return 0 00:19:11.556 22:10:53 -- nvmf/common.sh@478 -- # '[' -n 3976614 ']' 00:19:11.556 22:10:53 -- nvmf/common.sh@479 -- # killprocess 3976614 00:19:11.556 22:10:53 -- common/autotest_common.sh@936 -- # '[' -z 3976614 ']' 00:19:11.556 22:10:53 -- common/autotest_common.sh@940 -- # kill -0 3976614 00:19:11.556 22:10:53 -- common/autotest_common.sh@941 -- # uname 00:19:11.556 22:10:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.556 22:10:53 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3976614 00:19:11.556 22:10:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:11.556 22:10:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:11.556 22:10:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3976614' 00:19:11.556 killing process with pid 3976614 00:19:11.556 22:10:53 -- common/autotest_common.sh@955 -- # kill 3976614 00:19:11.556 [2024-04-24 22:10:53.797209] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:11.556 22:10:53 -- common/autotest_common.sh@960 -- # wait 3976614 00:19:12.123 22:10:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:12.123 22:10:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:12.123 22:10:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:12.123 22:10:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.123 22:10:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.123 22:10:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.123 22:10:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.123 22:10:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.407 22:10:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:15.407 22:10:57 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:19:15.407 00:19:15.407 real 0m44.709s 00:19:15.407 user 2m41.048s 00:19:15.407 sys 0m10.204s 00:19:15.407 22:10:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:15.407 22:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.407 ************************************ 00:19:15.407 END TEST nvmf_perf_adq 00:19:15.407 ************************************ 00:19:15.408 22:10:57 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:15.408 22:10:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.408 22:10:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.408 22:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.408 ************************************ 00:19:15.408 START TEST nvmf_shutdown 00:19:15.408 ************************************ 00:19:15.408 22:10:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:15.408 * Looking for test storage... 00:19:15.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.408 22:10:57 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.408 22:10:57 -- nvmf/common.sh@7 -- # uname -s 00:19:15.408 22:10:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.408 22:10:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.408 22:10:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.408 22:10:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.408 22:10:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.408 22:10:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.408 22:10:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.408 22:10:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.408 22:10:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.408 22:10:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.408 22:10:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.408 22:10:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.408 22:10:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.408 
22:10:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.408 22:10:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.408 22:10:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.408 22:10:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.408 22:10:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.408 22:10:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.408 22:10:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.408 22:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.408 22:10:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.408 22:10:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.408 22:10:57 -- paths/export.sh@5 -- # export PATH 00:19:15.408 22:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.408 22:10:57 -- nvmf/common.sh@47 -- # : 0 00:19:15.408 22:10:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.408 22:10:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.408 22:10:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.408 22:10:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.408 22:10:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.408 22:10:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.408 22:10:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.408 22:10:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.408 22:10:57 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.408 22:10:57 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.408 22:10:57 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:15.408 22:10:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.408 22:10:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.408 22:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.408 ************************************ 00:19:15.408 START TEST nvmf_shutdown_tc1 00:19:15.408 ************************************ 00:19:15.408 22:10:57 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:19:15.408 22:10:57 -- target/shutdown.sh@74 -- # starttarget 00:19:15.408 22:10:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:15.408 22:10:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:15.408 22:10:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.408 22:10:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:15.408 22:10:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:15.408 22:10:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:15.408 22:10:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.408 22:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.408 22:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.408 22:10:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:15.408 22:10:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:15.408 22:10:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.408 22:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:17.940 22:10:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.940 22:10:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:17.940 22:10:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:17.940 22:10:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:17.940 22:10:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:17.940 22:10:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:17.940 22:10:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:19:17.940 22:10:59 -- nvmf/common.sh@295 -- # net_devs=() 00:19:17.940 22:10:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:17.940 22:10:59 -- nvmf/common.sh@296 -- # e810=() 00:19:17.940 22:10:59 -- nvmf/common.sh@296 -- # local -ga e810 00:19:17.940 22:10:59 -- nvmf/common.sh@297 -- # x722=() 00:19:17.941 22:10:59 -- nvmf/common.sh@297 -- # local -ga x722 00:19:17.941 22:10:59 -- nvmf/common.sh@298 -- # mlx=() 00:19:17.941 22:10:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:17.941 22:10:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.941 22:10:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:19:17.941 22:10:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:17.941 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:17.941 22:10:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:17.941 22:10:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:17.941 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:17.941 22:10:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.941 22:10:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.941 22:10:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.941 22:10:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:17.941 Found net devices under 0000:84:00.0: cvl_0_0 00:19:17.941 22:10:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:17.941 22:10:59 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.941 22:10:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.941 22:10:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:17.941 Found net devices under 0000:84:00.1: cvl_0_1 00:19:17.941 22:10:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:17.941 22:10:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:17.941 22:10:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.941 22:10:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.941 22:10:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:17.941 22:10:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.941 22:10:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.941 22:10:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:17.941 22:10:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.941 22:10:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.941 22:10:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:17.941 22:10:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:17.941 22:10:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.941 22:10:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.941 22:10:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.941 22:10:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:19:17.941 22:10:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:17.941 22:10:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.941 22:10:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.941 22:10:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.941 22:10:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:19:17.941 00:19:17.941 --- 10.0.0.2 ping statistics --- 00:19:17.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.941 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:17.941 22:10:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:19:17.941 00:19:17.941 --- 10.0.0.1 ping statistics --- 00:19:17.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.941 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:19:17.941 22:10:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.941 22:10:59 -- nvmf/common.sh@411 -- # return 0 00:19:17.941 22:10:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:17.941 22:10:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.941 22:10:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:17.941 22:10:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.941 22:10:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:17.941 22:10:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:17.941 22:10:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:17.941 22:10:59 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:19:17.941 22:10:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.941 22:10:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.941 22:10:59 -- nvmf/common.sh@470 -- # nvmfpid=3980078 00:19:17.941 22:10:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:17.941 22:10:59 -- nvmf/common.sh@471 -- # waitforlisten 3980078 00:19:17.941 22:10:59 -- common/autotest_common.sh@817 -- # '[' -z 3980078 ']' 00:19:17.941 22:10:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.941 22:10:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.941 22:10:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.941 22:10:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.941 22:10:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.941 [2024-04-24 22:10:59.984313] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:17.941 [2024-04-24 22:10:59.984418] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.941 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.941 [2024-04-24 22:11:00.067220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.200 [2024-04-24 22:11:00.210698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.200 [2024-04-24 22:11:00.210760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.200 [2024-04-24 22:11:00.210776] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.200 [2024-04-24 22:11:00.210789] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.200 [2024-04-24 22:11:00.210801] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.200 [2024-04-24 22:11:00.210911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.200 [2024-04-24 22:11:00.210967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.200 [2024-04-24 22:11:00.211019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.200 [2024-04-24 22:11:00.211022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.200 22:11:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.200 22:11:00 -- common/autotest_common.sh@850 -- # return 0 00:19:18.200 22:11:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:18.200 22:11:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.200 22:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.200 22:11:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.200 22:11:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.200 22:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.200 22:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.200 [2024-04-24 22:11:00.384470] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.200 22:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.200 22:11:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:18.200 22:11:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:18.200 22:11:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:18.200 22:11:00 -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.200 22:11:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:18.200 22:11:00 -- target/shutdown.sh@28 -- # cat 00:19:18.200 22:11:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:18.200 22:11:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.200 22:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.458 Malloc1 00:19:18.458 [2024-04-24 22:11:00.478525] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype 
to be removed in v24.09 00:19:18.458 [2024-04-24 22:11:00.478870] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.458 Malloc2 00:19:18.458 Malloc3 00:19:18.458 Malloc4 00:19:18.458 Malloc5 00:19:18.458 Malloc6 00:19:18.716 Malloc7 00:19:18.716 Malloc8 00:19:18.716 Malloc9 00:19:18.716 Malloc10 00:19:18.716 22:11:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.716 22:11:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:18.716 22:11:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.716 22:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.716 22:11:00 -- target/shutdown.sh@78 -- # perfpid=3980256 00:19:18.716 22:11:00 -- target/shutdown.sh@79 -- # waitforlisten 3980256 /var/tmp/bdevperf.sock 00:19:18.716 22:11:00 -- common/autotest_common.sh@817 -- # '[' -z 3980256 ']' 00:19:18.716 22:11:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.716 22:11:00 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:18.716 22:11:00 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:18.716 22:11:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:18.716 22:11:00 -- nvmf/common.sh@521 -- # config=() 00:19:18.716 22:11:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:18.716 22:11:00 -- nvmf/common.sh@521 -- # local subsystem config 00:19:18.716 22:11:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:18.716 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.716 22:11:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.716 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.716 { 00:19:18.716 "params": { 00:19:18.716 "name": "Nvme$subsystem", 00:19:18.716 "trtype": "$TEST_TRANSPORT", 00:19:18.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.716 "adrfam": "ipv4", 00:19:18.716 "trsvcid": "$NVMF_PORT", 00:19:18.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.716 "hdgst": ${hdgst:-false}, 00:19:18.716 "ddgst": ${ddgst:-false} 00:19:18.716 }, 00:19:18.716 "method": "bdev_nvme_attach_controller" 00:19:18.716 } 00:19:18.716 EOF 00:19:18.716 )") 00:19:18.716 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.974 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.974 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.974 { 00:19:18.974 "params": { 00:19:18.974 "name": "Nvme$subsystem", 00:19:18.974 "trtype": "$TEST_TRANSPORT", 00:19:18.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.974 "adrfam": "ipv4", 00:19:18.974 "trsvcid": "$NVMF_PORT", 00:19:18.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.974 "hdgst": ${hdgst:-false}, 00:19:18.974 "ddgst": ${ddgst:-false} 00:19:18.974 }, 00:19:18.974 "method": "bdev_nvme_attach_controller" 00:19:18.974 } 00:19:18.974 EOF 00:19:18.974 )") 00:19:18.974 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.974 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 
00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- 
# cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:00 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:18.975 22:11:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:18.975 { 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme$subsystem", 00:19:18.975 "trtype": "$TEST_TRANSPORT", 00:19:18.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "$NVMF_PORT", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.975 "hdgst": ${hdgst:-false}, 00:19:18.975 "ddgst": ${ddgst:-false} 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 } 00:19:18.975 EOF 00:19:18.975 )") 00:19:18.975 22:11:01 -- nvmf/common.sh@543 -- # cat 00:19:18.975 22:11:01 -- nvmf/common.sh@545 -- # jq . 
00:19:18.975 22:11:01 -- nvmf/common.sh@546 -- # IFS=, 00:19:18.975 22:11:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme1", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme2", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme3", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme4", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme5", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 
00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme6", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.975 "method": "bdev_nvme_attach_controller" 00:19:18.975 },{ 00:19:18.975 "params": { 00:19:18.975 "name": "Nvme7", 00:19:18.975 "trtype": "tcp", 00:19:18.975 "traddr": "10.0.0.2", 00:19:18.975 "adrfam": "ipv4", 00:19:18.975 "trsvcid": "4420", 00:19:18.975 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:18.975 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:18.975 "hdgst": false, 00:19:18.975 "ddgst": false 00:19:18.975 }, 00:19:18.976 "method": "bdev_nvme_attach_controller" 00:19:18.976 },{ 00:19:18.976 "params": { 00:19:18.976 "name": "Nvme8", 00:19:18.976 "trtype": "tcp", 00:19:18.976 "traddr": "10.0.0.2", 00:19:18.976 "adrfam": "ipv4", 00:19:18.976 "trsvcid": "4420", 00:19:18.976 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:18.976 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:18.976 "hdgst": false, 00:19:18.976 "ddgst": false 00:19:18.976 }, 00:19:18.976 "method": "bdev_nvme_attach_controller" 00:19:18.976 },{ 00:19:18.976 "params": { 00:19:18.976 "name": "Nvme9", 00:19:18.976 "trtype": "tcp", 00:19:18.976 "traddr": "10.0.0.2", 00:19:18.976 "adrfam": "ipv4", 00:19:18.976 "trsvcid": "4420", 00:19:18.976 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:18.976 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:18.976 "hdgst": false, 00:19:18.976 "ddgst": false 00:19:18.976 }, 00:19:18.976 "method": "bdev_nvme_attach_controller" 
00:19:18.976 },{ 00:19:18.976 "params": { 00:19:18.976 "name": "Nvme10", 00:19:18.976 "trtype": "tcp", 00:19:18.976 "traddr": "10.0.0.2", 00:19:18.976 "adrfam": "ipv4", 00:19:18.976 "trsvcid": "4420", 00:19:18.976 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:18.976 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:18.976 "hdgst": false, 00:19:18.976 "ddgst": false 00:19:18.976 }, 00:19:18.976 "method": "bdev_nvme_attach_controller" 00:19:18.976 }' 00:19:18.976 [2024-04-24 22:11:01.017073] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:18.976 [2024-04-24 22:11:01.017170] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:18.976 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.976 [2024-04-24 22:11:01.094737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.976 [2024-04-24 22:11:01.213827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.875 22:11:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.875 22:11:03 -- common/autotest_common.sh@850 -- # return 0 00:19:20.875 22:11:03 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:20.875 22:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.875 22:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:20.875 22:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.875 22:11:03 -- target/shutdown.sh@83 -- # kill -9 3980256 00:19:20.875 22:11:03 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:20.875 22:11:03 -- target/shutdown.sh@87 -- # sleep 1 00:19:22.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3980256 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:19:22.284 22:11:04 -- target/shutdown.sh@88 -- # kill -0 3980078 00:19:22.284 22:11:04 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:22.284 22:11:04 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:22.284 22:11:04 -- nvmf/common.sh@521 -- # config=() 00:19:22.284 22:11:04 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.284 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.284 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.284 { 00:19:22.284 "params": { 00:19:22.284 "name": "Nvme$subsystem", 00:19:22.284 "trtype": "$TEST_TRANSPORT", 00:19:22.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.284 "adrfam": "ipv4", 00:19:22.284 "trsvcid": "$NVMF_PORT", 00:19:22.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 
-- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 
00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.285 "method": "bdev_nvme_attach_controller" 00:19:22.285 } 00:19:22.285 EOF 00:19:22.285 )") 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.285 22:11:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.285 22:11:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.285 { 00:19:22.285 "params": { 00:19:22.285 "name": "Nvme$subsystem", 00:19:22.285 "trtype": "$TEST_TRANSPORT", 00:19:22.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.285 "adrfam": "ipv4", 00:19:22.285 "trsvcid": "$NVMF_PORT", 00:19:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.285 "hdgst": ${hdgst:-false}, 00:19:22.285 "ddgst": ${ddgst:-false} 00:19:22.285 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 } 
00:19:22.286 EOF 00:19:22.286 )") 00:19:22.286 22:11:04 -- nvmf/common.sh@543 -- # cat 00:19:22.286 22:11:04 -- nvmf/common.sh@545 -- # jq . 00:19:22.286 22:11:04 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.286 22:11:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme1", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme2", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme3", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme4", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 
00:19:22.286 "params": { 00:19:22.286 "name": "Nvme5", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme6", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme7", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme8", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme9", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:22.286 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 },{ 00:19:22.286 "params": { 00:19:22.286 "name": "Nvme10", 00:19:22.286 "trtype": "tcp", 00:19:22.286 "traddr": "10.0.0.2", 00:19:22.286 "adrfam": "ipv4", 00:19:22.286 "trsvcid": "4420", 00:19:22.286 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:22.286 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:22.286 "hdgst": false, 00:19:22.286 "ddgst": false 00:19:22.286 }, 00:19:22.286 "method": "bdev_nvme_attach_controller" 00:19:22.286 }' 00:19:22.286 [2024-04-24 22:11:04.164408] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:22.286 [2024-04-24 22:11:04.164507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980560 ] 00:19:22.286 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.286 [2024-04-24 22:11:04.242443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.286 [2024-04-24 22:11:04.363476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.660 Running I/O for 1 seconds... 
00:19:25.035 00:19:25.035 Latency(us) 00:19:25.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.035 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme1n1 : 1.11 187.46 11.72 0.00 0.00 319265.35 28156.21 315349.52 00:19:25.035 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme2n1 : 1.21 212.01 13.25 0.00 0.00 293781.81 25437.68 329330.54 00:19:25.035 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme3n1 : 1.12 171.62 10.73 0.00 0.00 355496.64 19515.16 307582.29 00:19:25.035 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme4n1 : 1.22 210.59 13.16 0.00 0.00 284619.66 23981.32 330883.98 00:19:25.035 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme5n1 : 1.15 166.61 10.41 0.00 0.00 353110.09 23495.87 354185.67 00:19:25.035 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme6n1 : 1.17 164.40 10.28 0.00 0.00 351407.98 25049.32 335544.32 00:19:25.035 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme7n1 : 1.22 209.63 13.10 0.00 0.00 271914.86 19126.80 332437.43 00:19:25.035 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme8n1 : 1.23 208.56 13.04 0.00 0.00 268415.62 19806.44 293601.28 00:19:25.035 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme9n1 : 1.17 163.43 10.21 0.00 0.00 333977.22 33399.09 337097.77 00:19:25.035 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:25.035 Verification LBA range: start 0x0 length 0x400 00:19:25.035 Nvme10n1 : 1.24 206.73 12.92 0.00 0.00 261469.96 9903.22 365059.79 00:19:25.035 =================================================================================================================== 00:19:25.035 Total : 1901.05 118.82 0.00 0.00 304692.08 9903.22 365059.79 00:19:25.293 22:11:07 -- target/shutdown.sh@94 -- # stoptarget 00:19:25.293 22:11:07 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:25.293 22:11:07 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:25.293 22:11:07 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:25.293 22:11:07 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:25.293 22:11:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:25.293 22:11:07 -- nvmf/common.sh@117 -- # sync 00:19:25.293 22:11:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.293 22:11:07 -- nvmf/common.sh@120 -- # set +e 00:19:25.293 22:11:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.293 22:11:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.293 rmmod nvme_tcp 00:19:25.293 rmmod nvme_fabrics 00:19:25.293 rmmod nvme_keyring 00:19:25.293 22:11:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.293 22:11:07 -- nvmf/common.sh@124 -- # set -e 00:19:25.293 22:11:07 -- nvmf/common.sh@125 -- # return 0 00:19:25.293 22:11:07 -- nvmf/common.sh@478 -- # '[' -n 3980078 ']' 00:19:25.293 22:11:07 -- nvmf/common.sh@479 -- # killprocess 3980078 00:19:25.293 22:11:07 -- common/autotest_common.sh@936 -- # '[' -z 3980078 ']' 00:19:25.293 22:11:07 -- 
common/autotest_common.sh@940 -- # kill -0 3980078 00:19:25.293 22:11:07 -- common/autotest_common.sh@941 -- # uname 00:19:25.293 22:11:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.293 22:11:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3980078 00:19:25.293 22:11:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:25.293 22:11:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:25.293 22:11:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3980078' 00:19:25.293 killing process with pid 3980078 00:19:25.293 22:11:07 -- common/autotest_common.sh@955 -- # kill 3980078 00:19:25.293 [2024-04-24 22:11:07.525798] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:25.293 22:11:07 -- common/autotest_common.sh@960 -- # wait 3980078 00:19:25.859 22:11:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:25.859 22:11:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:25.859 22:11:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:25.859 22:11:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.859 22:11:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.859 22:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.859 22:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.859 22:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.389 22:11:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.389 00:19:28.389 real 0m12.669s 00:19:28.389 user 0m36.758s 00:19:28.389 sys 0m3.525s 00:19:28.389 22:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:28.389 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.389 ************************************ 00:19:28.389 END TEST 
nvmf_shutdown_tc1 00:19:28.389 ************************************ 00:19:28.389 22:11:10 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:28.389 22:11:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:28.389 22:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:28.389 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.389 ************************************ 00:19:28.389 START TEST nvmf_shutdown_tc2 00:19:28.389 ************************************ 00:19:28.389 22:11:10 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:19:28.389 22:11:10 -- target/shutdown.sh@99 -- # starttarget 00:19:28.389 22:11:10 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:28.389 22:11:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:28.389 22:11:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.389 22:11:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:28.389 22:11:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:28.389 22:11:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:28.389 22:11:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.389 22:11:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.389 22:11:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.389 22:11:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:28.389 22:11:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:28.389 22:11:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.389 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.389 22:11:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.389 22:11:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.389 22:11:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.389 22:11:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.389 22:11:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.389 22:11:10 -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.389 22:11:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.389 22:11:10 -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.389 22:11:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.389 22:11:10 -- nvmf/common.sh@296 -- # e810=() 00:19:28.389 22:11:10 -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.389 22:11:10 -- nvmf/common.sh@297 -- # x722=() 00:19:28.389 22:11:10 -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.389 22:11:10 -- nvmf/common.sh@298 -- # mlx=() 00:19:28.389 22:11:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.389 22:11:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.389 22:11:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.390 22:11:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.390 22:11:10 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.390 22:11:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:28.390 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:28.390 22:11:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.390 22:11:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:28.390 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:28.390 22:11:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.390 22:11:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.390 22:11:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.390 22:11:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:28.390 Found net devices under 0000:84:00.0: cvl_0_0 00:19:28.390 22:11:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.390 22:11:10 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.390 22:11:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.390 22:11:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.390 22:11:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:28.390 Found net devices under 0000:84:00.1: cvl_0_1 00:19:28.390 22:11:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.390 22:11:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:28.390 22:11:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:28.390 22:11:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.390 22:11:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.390 22:11:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.390 22:11:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.390 22:11:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.390 22:11:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.390 22:11:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.390 22:11:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.390 22:11:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.390 22:11:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.390 22:11:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.390 22:11:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.390 22:11:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.390 22:11:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:19:28.390 22:11:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.390 22:11:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.390 22:11:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.390 22:11:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.390 22:11:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.390 22:11:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:19:28.390 00:19:28.390 --- 10.0.0.2 ping statistics --- 00:19:28.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.390 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:19:28.390 22:11:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:28.390 00:19:28.390 --- 10.0.0.1 ping statistics --- 00:19:28.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.390 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:28.390 22:11:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.390 22:11:10 -- nvmf/common.sh@411 -- # return 0 00:19:28.390 22:11:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:28.390 22:11:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.390 22:11:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:28.390 22:11:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.390 22:11:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:28.390 22:11:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:28.390 22:11:10 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:28.390 22:11:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:28.390 22:11:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:28.390 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.390 22:11:10 -- nvmf/common.sh@470 -- # nvmfpid=3981454 00:19:28.390 22:11:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:28.390 22:11:10 -- nvmf/common.sh@471 -- # waitforlisten 3981454 00:19:28.390 22:11:10 -- common/autotest_common.sh@817 -- # '[' -z 3981454 ']' 00:19:28.390 22:11:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.390 22:11:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:28.390 22:11:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:28.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.390 22:11:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:28.390 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.390 [2024-04-24 22:11:10.557961] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:28.390 [2024-04-24 22:11:10.558066] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.390 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.648 [2024-04-24 22:11:10.647388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.648 [2024-04-24 22:11:10.784564] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.648 [2024-04-24 22:11:10.784632] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.648 [2024-04-24 22:11:10.784669] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.648 [2024-04-24 22:11:10.784686] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.648 [2024-04-24 22:11:10.784701] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.648 [2024-04-24 22:11:10.784815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.648 [2024-04-24 22:11:10.784870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.648 [2024-04-24 22:11:10.784923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:28.648 [2024-04-24 22:11:10.784926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.907 22:11:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.907 22:11:10 -- common/autotest_common.sh@850 -- # return 0 00:19:28.907 22:11:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:28.907 22:11:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:28.907 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.907 22:11:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.907 22:11:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:28.907 22:11:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.907 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.907 [2024-04-24 22:11:10.952487] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.907 22:11:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.907 22:11:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:28.907 22:11:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:28.907 22:11:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:28.907 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.907 22:11:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:28.907 22:11:10 -- target/shutdown.sh@28 -- # cat 00:19:28.907 22:11:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:28.907 22:11:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.907 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:28.907 Malloc1 00:19:28.907 [2024-04-24 22:11:11.037244] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:28.907 [2024-04-24 22:11:11.037601] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.907 Malloc2 00:19:28.907 Malloc3 00:19:28.907 Malloc4 00:19:29.165 Malloc5 00:19:29.165 Malloc6 00:19:29.165 Malloc7 00:19:29.165 Malloc8 00:19:29.165 Malloc9 00:19:29.423 Malloc10 00:19:29.423 22:11:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.423 22:11:11 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:29.423 22:11:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:29.423 22:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:29.423 22:11:11 -- target/shutdown.sh@103 -- # perfpid=3981635 00:19:29.423 22:11:11 -- target/shutdown.sh@104 -- # waitforlisten 3981635 /var/tmp/bdevperf.sock 00:19:29.423 22:11:11 -- common/autotest_common.sh@817 -- # '[' -z 3981635 ']' 00:19:29.423 22:11:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.423 22:11:11 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:29.423 22:11:11 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:29.423 22:11:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:29.423 22:11:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:29.423 22:11:11 -- nvmf/common.sh@521 -- # config=() 00:19:29.423 22:11:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:29.423 22:11:11 -- nvmf/common.sh@521 -- # local subsystem config 00:19:29.423 22:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:29.423 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.423 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.423 { 00:19:29.423 "params": { 00:19:29.423 "name": "Nvme$subsystem", 00:19:29.423 "trtype": "$TEST_TRANSPORT", 00:19:29.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.423 "adrfam": "ipv4", 00:19:29.423 "trsvcid": "$NVMF_PORT", 00:19:29.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": 
"Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 
00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:29.424 { 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme$subsystem", 00:19:29.424 "trtype": "$TEST_TRANSPORT", 00:19:29.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "$NVMF_PORT", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.424 "hdgst": ${hdgst:-false}, 00:19:29.424 "ddgst": ${ddgst:-false} 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 } 00:19:29.424 EOF 00:19:29.424 )") 00:19:29.424 22:11:11 -- nvmf/common.sh@543 -- # cat 00:19:29.424 22:11:11 -- nvmf/common.sh@545 -- # jq . 
00:19:29.424 22:11:11 -- nvmf/common.sh@546 -- # IFS=, 00:19:29.424 22:11:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme1", 00:19:29.424 "trtype": "tcp", 00:19:29.424 "traddr": "10.0.0.2", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "4420", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.424 "hdgst": false, 00:19:29.424 "ddgst": false 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 },{ 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme2", 00:19:29.424 "trtype": "tcp", 00:19:29.424 "traddr": "10.0.0.2", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "4420", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:29.424 "hdgst": false, 00:19:29.424 "ddgst": false 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 },{ 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme3", 00:19:29.424 "trtype": "tcp", 00:19:29.424 "traddr": "10.0.0.2", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "4420", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:29.424 "hdgst": false, 00:19:29.424 "ddgst": false 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 },{ 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme4", 00:19:29.424 "trtype": "tcp", 00:19:29.424 "traddr": "10.0.0.2", 00:19:29.424 "adrfam": "ipv4", 00:19:29.424 "trsvcid": "4420", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:29.424 "hdgst": false, 00:19:29.424 "ddgst": false 00:19:29.424 }, 00:19:29.424 "method": "bdev_nvme_attach_controller" 00:19:29.424 },{ 00:19:29.424 "params": { 00:19:29.424 "name": "Nvme5", 00:19:29.424 "trtype": "tcp", 00:19:29.424 "traddr": "10.0.0.2", 00:19:29.424 "adrfam": "ipv4", 
00:19:29.424 "trsvcid": "4420", 00:19:29.424 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:29.424 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:29.424 "hdgst": false, 00:19:29.424 "ddgst": false 00:19:29.424 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 00:19:29.425 },{ 00:19:29.425 "params": { 00:19:29.425 "name": "Nvme6", 00:19:29.425 "trtype": "tcp", 00:19:29.425 "traddr": "10.0.0.2", 00:19:29.425 "adrfam": "ipv4", 00:19:29.425 "trsvcid": "4420", 00:19:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:29.425 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:29.425 "hdgst": false, 00:19:29.425 "ddgst": false 00:19:29.425 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 00:19:29.425 },{ 00:19:29.425 "params": { 00:19:29.425 "name": "Nvme7", 00:19:29.425 "trtype": "tcp", 00:19:29.425 "traddr": "10.0.0.2", 00:19:29.425 "adrfam": "ipv4", 00:19:29.425 "trsvcid": "4420", 00:19:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:29.425 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:29.425 "hdgst": false, 00:19:29.425 "ddgst": false 00:19:29.425 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 00:19:29.425 },{ 00:19:29.425 "params": { 00:19:29.425 "name": "Nvme8", 00:19:29.425 "trtype": "tcp", 00:19:29.425 "traddr": "10.0.0.2", 00:19:29.425 "adrfam": "ipv4", 00:19:29.425 "trsvcid": "4420", 00:19:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:29.425 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:29.425 "hdgst": false, 00:19:29.425 "ddgst": false 00:19:29.425 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 00:19:29.425 },{ 00:19:29.425 "params": { 00:19:29.425 "name": "Nvme9", 00:19:29.425 "trtype": "tcp", 00:19:29.425 "traddr": "10.0.0.2", 00:19:29.425 "adrfam": "ipv4", 00:19:29.425 "trsvcid": "4420", 00:19:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:29.425 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:29.425 "hdgst": false, 00:19:29.425 "ddgst": false 00:19:29.425 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 
00:19:29.425 },{ 00:19:29.425 "params": { 00:19:29.425 "name": "Nvme10", 00:19:29.425 "trtype": "tcp", 00:19:29.425 "traddr": "10.0.0.2", 00:19:29.425 "adrfam": "ipv4", 00:19:29.425 "trsvcid": "4420", 00:19:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:29.425 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:29.425 "hdgst": false, 00:19:29.425 "ddgst": false 00:19:29.425 }, 00:19:29.425 "method": "bdev_nvme_attach_controller" 00:19:29.425 }' 00:19:29.425 [2024-04-24 22:11:11.548145] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:29.425 [2024-04-24 22:11:11.548232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981635 ] 00:19:29.425 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.425 [2024-04-24 22:11:11.618977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.683 [2024-04-24 22:11:11.737876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.581 Running I/O for 10 seconds... 
00:19:31.581 22:11:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.581 22:11:13 -- common/autotest_common.sh@850 -- # return 0 00:19:31.581 22:11:13 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:31.581 22:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.581 22:11:13 -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 22:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.839 22:11:13 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:31.839 22:11:13 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:31.839 22:11:13 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:31.839 22:11:13 -- target/shutdown.sh@57 -- # local ret=1 00:19:31.839 22:11:13 -- target/shutdown.sh@58 -- # local i 00:19:31.839 22:11:13 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:31.839 22:11:13 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:31.839 22:11:13 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:31.839 22:11:13 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:31.839 22:11:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.839 22:11:13 -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 22:11:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.839 22:11:13 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:31.839 22:11:13 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:31.839 22:11:13 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:32.098 22:11:14 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:32.098 22:11:14 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:32.098 22:11:14 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:32.098 22:11:14 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:32.098 22:11:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.098 22:11:14 
-- common/autotest_common.sh@10 -- # set +x 00:19:32.098 22:11:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.098 22:11:14 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:32.098 22:11:14 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:32.098 22:11:14 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:32.356 22:11:14 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:32.356 22:11:14 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:32.356 22:11:14 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:32.356 22:11:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.356 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:19:32.356 22:11:14 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:32.356 22:11:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.356 22:11:14 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:32.356 22:11:14 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:32.356 22:11:14 -- target/shutdown.sh@64 -- # ret=0 00:19:32.356 22:11:14 -- target/shutdown.sh@65 -- # break 00:19:32.356 22:11:14 -- target/shutdown.sh@69 -- # return 0 00:19:32.356 22:11:14 -- target/shutdown.sh@110 -- # killprocess 3981635 00:19:32.356 22:11:14 -- common/autotest_common.sh@936 -- # '[' -z 3981635 ']' 00:19:32.356 22:11:14 -- common/autotest_common.sh@940 -- # kill -0 3981635 00:19:32.356 22:11:14 -- common/autotest_common.sh@941 -- # uname 00:19:32.356 22:11:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.356 22:11:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3981635 00:19:32.356 22:11:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.356 22:11:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.356 22:11:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3981635' 00:19:32.356 killing process with pid 3981635 00:19:32.356 22:11:14 -- 
common/autotest_common.sh@955 -- # kill 3981635 00:19:32.356 22:11:14 -- common/autotest_common.sh@960 -- # wait 3981635 00:19:32.614 Received shutdown signal, test time was about 1.003508 seconds 00:19:32.614 00:19:32.614 Latency(us) 00:19:32.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.614 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.614 Verification LBA range: start 0x0 length 0x400 00:19:32.614 Nvme1n1 : 0.92 208.81 13.05 0.00 0.00 302297.44 26796.94 267192.70 00:19:32.614 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme2n1 : 0.92 208.11 13.01 0.00 0.00 295887.45 38253.61 240784.12 00:19:32.615 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme3n1 : 0.93 206.45 12.90 0.00 0.00 292659.20 22427.88 299815.06 00:19:32.615 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme4n1 : 0.95 202.36 12.65 0.00 0.00 292015.22 20583.16 282727.16 00:19:32.615 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme5n1 : 0.96 200.29 12.52 0.00 0.00 288656.50 22039.51 302921.96 00:19:32.615 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme6n1 : 0.96 205.06 12.82 0.00 0.00 274657.76 2148.12 285834.05 00:19:32.615 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme7n1 : 1.00 255.34 15.96 0.00 0.00 206800.02 18835.53 284280.60 00:19:32.615 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme8n1 : 0.94 209.27 13.08 0.00 0.00 255079.68 3713.71 281173.71 00:19:32.615 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme9n1 : 0.94 203.62 12.73 0.00 0.00 257210.85 23690.05 278066.82 00:19:32.615 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.615 Verification LBA range: start 0x0 length 0x400 00:19:32.615 Nvme10n1 : 0.97 198.72 12.42 0.00 0.00 258429.03 22233.69 315349.52 00:19:32.615 =================================================================================================================== 00:19:32.615 Total : 2098.01 131.13 0.00 0.00 270232.55 2148.12 315349.52 00:19:32.873 22:11:15 -- target/shutdown.sh@113 -- # sleep 1 00:19:33.807 22:11:16 -- target/shutdown.sh@114 -- # kill -0 3981454 00:19:33.807 22:11:16 -- target/shutdown.sh@116 -- # stoptarget 00:19:33.807 22:11:16 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:33.807 22:11:16 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:33.807 22:11:16 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:33.807 22:11:16 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:33.807 22:11:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:33.807 22:11:16 -- nvmf/common.sh@117 -- # sync 00:19:33.807 22:11:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.807 22:11:16 -- nvmf/common.sh@120 -- # set +e 00:19:33.807 22:11:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.807 22:11:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.807 rmmod nvme_tcp 00:19:33.807 rmmod nvme_fabrics 00:19:34.066 rmmod nvme_keyring 00:19:34.066 22:11:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.066 22:11:16 -- 
nvmf/common.sh@124 -- # set -e 00:19:34.066 22:11:16 -- nvmf/common.sh@125 -- # return 0 00:19:34.066 22:11:16 -- nvmf/common.sh@478 -- # '[' -n 3981454 ']' 00:19:34.066 22:11:16 -- nvmf/common.sh@479 -- # killprocess 3981454 00:19:34.066 22:11:16 -- common/autotest_common.sh@936 -- # '[' -z 3981454 ']' 00:19:34.066 22:11:16 -- common/autotest_common.sh@940 -- # kill -0 3981454 00:19:34.066 22:11:16 -- common/autotest_common.sh@941 -- # uname 00:19:34.066 22:11:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.066 22:11:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3981454 00:19:34.066 22:11:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:34.066 22:11:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:34.066 22:11:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3981454' 00:19:34.066 killing process with pid 3981454 00:19:34.066 22:11:16 -- common/autotest_common.sh@955 -- # kill 3981454 00:19:34.066 [2024-04-24 22:11:16.142581] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:34.066 22:11:16 -- common/autotest_common.sh@960 -- # wait 3981454 00:19:34.633 22:11:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:34.633 22:11:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:34.633 22:11:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:34.633 22:11:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.633 22:11:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.633 22:11:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.633 22:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.633 22:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.538 22:11:18 -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:36.538 00:19:36.538 real 0m8.446s 00:19:36.538 user 0m26.768s 00:19:36.538 sys 0m1.581s 00:19:36.538 22:11:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:36.538 22:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:36.538 ************************************ 00:19:36.538 END TEST nvmf_shutdown_tc2 00:19:36.538 ************************************ 00:19:36.538 22:11:18 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:36.538 22:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:36.538 22:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.538 22:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:36.796 ************************************ 00:19:36.796 START TEST nvmf_shutdown_tc3 00:19:36.796 ************************************ 00:19:36.796 22:11:18 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:19:36.796 22:11:18 -- target/shutdown.sh@121 -- # starttarget 00:19:36.796 22:11:18 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:36.796 22:11:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:36.796 22:11:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.796 22:11:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:36.796 22:11:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:36.796 22:11:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:36.796 22:11:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.796 22:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.797 22:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.797 22:11:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:36.797 22:11:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.797 22:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:36.797 22:11:18 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:19:36.797 22:11:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.797 22:11:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.797 22:11:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.797 22:11:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.797 22:11:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.797 22:11:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.797 22:11:18 -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.797 22:11:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.797 22:11:18 -- nvmf/common.sh@296 -- # e810=() 00:19:36.797 22:11:18 -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.797 22:11:18 -- nvmf/common.sh@297 -- # x722=() 00:19:36.797 22:11:18 -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.797 22:11:18 -- nvmf/common.sh@298 -- # mlx=() 00:19:36.797 22:11:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.797 22:11:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.797 22:11:18 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:36.797 22:11:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.797 22:11:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.797 22:11:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.797 22:11:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:36.797 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:36.797 22:11:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.797 22:11:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:36.797 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:36.797 22:11:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.797 22:11:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.797 22:11:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.797 22:11:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:36.797 22:11:18 -- 
nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.797 22:11:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:36.797 Found net devices under 0000:84:00.0: cvl_0_0 00:19:36.797 22:11:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.797 22:11:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.797 22:11:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.797 22:11:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:36.797 22:11:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.797 22:11:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:36.797 Found net devices under 0000:84:00.1: cvl_0_1 00:19:36.797 22:11:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.797 22:11:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:36.797 22:11:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:36.797 22:11:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:36.797 22:11:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:36.797 22:11:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.797 22:11:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.797 22:11:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.797 22:11:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.797 22:11:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.797 22:11:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.797 22:11:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.797 22:11:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.797 22:11:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.797 22:11:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:19:36.797 22:11:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.797 22:11:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.797 22:11:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.797 22:11:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.797 22:11:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.797 22:11:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.797 22:11:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.797 22:11:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.797 22:11:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.055 22:11:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:37.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:37.055 00:19:37.055 --- 10.0.0.2 ping statistics --- 00:19:37.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.055 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:37.055 22:11:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:19:37.055 00:19:37.055 --- 10.0.0.1 ping statistics --- 00:19:37.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.055 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:37.055 22:11:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.055 22:11:19 -- nvmf/common.sh@411 -- # return 0 00:19:37.055 22:11:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:37.055 22:11:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.055 22:11:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:37.055 22:11:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:37.055 22:11:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.055 22:11:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:37.055 22:11:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:37.055 22:11:19 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:37.055 22:11:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:37.055 22:11:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:37.055 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.055 22:11:19 -- nvmf/common.sh@470 -- # nvmfpid=3982566 00:19:37.055 22:11:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:37.055 22:11:19 -- nvmf/common.sh@471 -- # waitforlisten 3982566 00:19:37.055 22:11:19 -- common/autotest_common.sh@817 -- # '[' -z 3982566 ']' 00:19:37.055 22:11:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.055 22:11:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.055 22:11:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:37.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.055 22:11:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.055 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.055 [2024-04-24 22:11:19.143954] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:37.055 [2024-04-24 22:11:19.144047] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.055 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.055 [2024-04-24 22:11:19.219714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:37.314 [2024-04-24 22:11:19.343099] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.314 [2024-04-24 22:11:19.343163] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.314 [2024-04-24 22:11:19.343179] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.314 [2024-04-24 22:11:19.343193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.314 [2024-04-24 22:11:19.343205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:37.314 [2024-04-24 22:11:19.343306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.314 [2024-04-24 22:11:19.343367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.314 [2024-04-24 22:11:19.346416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:37.314 [2024-04-24 22:11:19.346423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.314 22:11:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:37.314 22:11:19 -- common/autotest_common.sh@850 -- # return 0 00:19:37.314 22:11:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:37.314 22:11:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:37.314 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.314 22:11:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.314 22:11:19 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:37.314 22:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.314 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.314 [2024-04-24 22:11:19.511401] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.314 22:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.314 22:11:19 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:37.314 22:11:19 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:37.314 22:11:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:37.314 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.314 22:11:19 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:37.314 22:11:19 -- target/shutdown.sh@28 -- # cat 00:19:37.314 22:11:19 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:37.314 22:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.314 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:19:37.572 Malloc1 00:19:37.572 [2024-04-24 22:11:19.597586] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:37.572 [2024-04-24 22:11:19.597944] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.572 Malloc2 00:19:37.572 Malloc3 00:19:37.572 Malloc4 00:19:37.572 Malloc5 00:19:37.572 Malloc6 00:19:37.829 Malloc7 00:19:37.829 Malloc8 00:19:37.829 Malloc9 00:19:37.829 Malloc10 00:19:37.829 22:11:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.829 22:11:20 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:37.829 22:11:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:37.829 22:11:20 -- common/autotest_common.sh@10 -- # set +x 00:19:37.829 22:11:20 -- target/shutdown.sh@125 -- # perfpid=3982743 00:19:37.829 22:11:20 -- target/shutdown.sh@126 -- # waitforlisten 3982743 /var/tmp/bdevperf.sock 00:19:37.829 22:11:20 -- common/autotest_common.sh@817 -- # '[' -z 3982743 ']' 00:19:37.829 22:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.829 22:11:20 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:37.829 22:11:20 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:37.829 22:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.829 22:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:37.829 22:11:20 -- nvmf/common.sh@521 -- # config=() 00:19:37.829 22:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.829 22:11:20 -- nvmf/common.sh@521 -- # local subsystem config 00:19:37.829 22:11:20 -- common/autotest_common.sh@10 -- # set +x 00:19:37.829 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:37.829 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:37.829 { 00:19:37.829 "params": { 00:19:37.829 "name": "Nvme$subsystem", 00:19:37.829 "trtype": "$TEST_TRANSPORT", 00:19:37.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.829 "adrfam": "ipv4", 00:19:37.829 "trsvcid": "$NVMF_PORT", 00:19:37.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.829 "hdgst": ${hdgst:-false}, 00:19:37.829 "ddgst": ${ddgst:-false} 00:19:37.829 }, 00:19:37.829 "method": "bdev_nvme_attach_controller" 00:19:37.829 } 00:19:37.829 EOF 00:19:37.829 )") 00:19:37.829 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:37.829 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:37.829 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:37.829 { 00:19:37.829 "params": { 00:19:37.829 "name": "Nvme$subsystem", 00:19:37.829 "trtype": "$TEST_TRANSPORT", 00:19:37.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.829 "adrfam": "ipv4", 00:19:37.829 "trsvcid": "$NVMF_PORT", 00:19:37.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.829 "hdgst": ${hdgst:-false}, 00:19:37.829 "ddgst": ${ddgst:-false} 00:19:37.829 }, 00:19:37.829 "method": "bdev_nvme_attach_controller" 00:19:37.829 } 00:19:37.829 EOF 00:19:37.829 )") 00:19:37.829 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:37.829 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:37.829 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:37.829 { 00:19:37.829 "params": { 00:19:37.829 "name": 
"Nvme$subsystem", 00:19:37.829 "trtype": "$TEST_TRANSPORT", 00:19:37.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.829 "adrfam": "ipv4", 00:19:37.829 "trsvcid": "$NVMF_PORT", 00:19:37.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.829 "hdgst": ${hdgst:-false}, 00:19:37.829 "ddgst": ${ddgst:-false} 00:19:37.829 }, 00:19:37.829 "method": "bdev_nvme_attach_controller" 00:19:37.829 } 00:19:37.830 EOF 00:19:37.830 )") 00:19:37.830 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:37.830 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:37.830 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:37.830 { 00:19:37.830 "params": { 00:19:37.830 "name": "Nvme$subsystem", 00:19:37.830 "trtype": "$TEST_TRANSPORT", 00:19:37.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.830 "adrfam": "ipv4", 00:19:37.830 "trsvcid": "$NVMF_PORT", 00:19:37.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.830 "hdgst": ${hdgst:-false}, 00:19:37.830 "ddgst": ${ddgst:-false} 00:19:37.830 }, 00:19:37.830 "method": "bdev_nvme_attach_controller" 00:19:37.830 } 00:19:37.830 EOF 00:19:37.830 )") 00:19:37.830 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 
00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:38.088 { 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme$subsystem", 00:19:38.088 "trtype": "$TEST_TRANSPORT", 00:19:38.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "$NVMF_PORT", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.088 "hdgst": ${hdgst:-false}, 00:19:38.088 "ddgst": ${ddgst:-false} 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 } 00:19:38.088 EOF 00:19:38.088 )") 00:19:38.088 22:11:20 -- nvmf/common.sh@543 -- # cat 00:19:38.088 22:11:20 -- nvmf/common.sh@545 -- # jq . 
00:19:38.088 22:11:20 -- nvmf/common.sh@546 -- # IFS=, 00:19:38.088 22:11:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme1", 00:19:38.088 "trtype": "tcp", 00:19:38.088 "traddr": "10.0.0.2", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "4420", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.088 "hdgst": false, 00:19:38.088 "ddgst": false 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 },{ 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme2", 00:19:38.088 "trtype": "tcp", 00:19:38.088 "traddr": "10.0.0.2", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "4420", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:38.088 "hdgst": false, 00:19:38.088 "ddgst": false 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 },{ 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme3", 00:19:38.088 "trtype": "tcp", 00:19:38.088 "traddr": "10.0.0.2", 00:19:38.088 "adrfam": "ipv4", 00:19:38.088 "trsvcid": "4420", 00:19:38.088 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:38.088 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:38.088 "hdgst": false, 00:19:38.088 "ddgst": false 00:19:38.088 }, 00:19:38.088 "method": "bdev_nvme_attach_controller" 00:19:38.088 },{ 00:19:38.088 "params": { 00:19:38.088 "name": "Nvme4", 00:19:38.088 "trtype": "tcp", 00:19:38.088 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme5", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 
00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme6", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme7", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme8", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme9", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 
00:19:38.089 },{ 00:19:38.089 "params": { 00:19:38.089 "name": "Nvme10", 00:19:38.089 "trtype": "tcp", 00:19:38.089 "traddr": "10.0.0.2", 00:19:38.089 "adrfam": "ipv4", 00:19:38.089 "trsvcid": "4420", 00:19:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:38.089 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:38.089 "hdgst": false, 00:19:38.089 "ddgst": false 00:19:38.089 }, 00:19:38.089 "method": "bdev_nvme_attach_controller" 00:19:38.089 }' 00:19:38.089 [2024-04-24 22:11:20.115378] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:38.089 [2024-04-24 22:11:20.115483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982743 ] 00:19:38.089 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.089 [2024-04-24 22:11:20.188079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.089 [2024-04-24 22:11:20.310755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.987 Running I/O for 10 seconds... 
00:19:39.987 22:11:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.987 22:11:22 -- common/autotest_common.sh@850 -- # return 0 00:19:39.987 22:11:22 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:39.987 22:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.987 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:19:39.987 22:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.987 22:11:22 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:39.987 22:11:22 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:39.987 22:11:22 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:39.987 22:11:22 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:39.987 22:11:22 -- target/shutdown.sh@57 -- # local ret=1 00:19:39.987 22:11:22 -- target/shutdown.sh@58 -- # local i 00:19:39.987 22:11:22 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:39.987 22:11:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:39.987 22:11:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:39.987 22:11:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:39.987 22:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.987 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:19:39.987 22:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.246 22:11:22 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:40.246 22:11:22 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:40.246 22:11:22 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:40.525 22:11:22 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:40.525 22:11:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:40.525 22:11:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:40.525 
22:11:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:40.525 22:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.525 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:19:40.525 22:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.525 22:11:22 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:40.525 22:11:22 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:40.525 22:11:22 -- target/shutdown.sh@64 -- # ret=0 00:19:40.525 22:11:22 -- target/shutdown.sh@65 -- # break 00:19:40.525 22:11:22 -- target/shutdown.sh@69 -- # return 0 00:19:40.525 22:11:22 -- target/shutdown.sh@135 -- # killprocess 3982566 00:19:40.525 22:11:22 -- common/autotest_common.sh@936 -- # '[' -z 3982566 ']' 00:19:40.525 22:11:22 -- common/autotest_common.sh@940 -- # kill -0 3982566 00:19:40.525 22:11:22 -- common/autotest_common.sh@941 -- # uname 00:19:40.525 22:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.525 22:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3982566 00:19:40.525 22:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:40.525 22:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:40.525 22:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3982566' 00:19:40.525 killing process with pid 3982566 00:19:40.525 22:11:22 -- common/autotest_common.sh@955 -- # kill 3982566 00:19:40.525 [2024-04-24 22:11:22.578523] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:40.525 22:11:22 -- common/autotest_common.sh@960 -- # wait 3982566 00:19:40.525 [2024-04-24 22:11:22.579059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913f20 is same with the state(5) to be set 00:19:40.525 [2024-04-24 22:11:22.579095] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913f20 is same with the state(5) to be set
00:19:40.526 [2024-04-24 22:11:22.581320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc65280 is same with the state(5) to be set
00:19:40.527 [2024-04-24 22:11:22.583703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9143b0 is same with the state(5) to be set
00:19:40.528 [2024-04-24 22:11:22.586312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set
is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586862] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.586989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 
00:19:40.528 [2024-04-24 22:11:22.587003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.587021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.587036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.587050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.528 [2024-04-24 22:11:22.587065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587177] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914840 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 
[2024-04-24 22:11:22.587422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24885f0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2307200 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.587901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9190 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.587949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.587972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 
22:11:22.587989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.588006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.588022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.588038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.588055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.529 [2024-04-24 22:11:22.588070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.529 [2024-04-24 22:11:22.588095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9e40 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588624] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 
00:19:40.529 [2024-04-24 22:11:22.588719] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588838] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588893] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.588990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.589003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.589023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.529 [2024-04-24 22:11:22.589039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 
is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 
00:19:40.530 [2024-04-24 22:11:22.589446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.589559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914cd0 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590548] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590807] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 
is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.530 [2024-04-24 22:11:22.590976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 00:19:40.531 [2024-04-24 22:11:22.591069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set 
00:19:40.531 [2024-04-24 22:11:22.591083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.591406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x915160 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.592999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.531 [2024-04-24 22:11:22.593700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.593813] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc64040 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595809] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.595997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.596010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.596025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.596040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc644d0 is same with the state(5) to be set
00:19:40.532 [2024-04-24 22:11:22.613897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.613983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.614041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.614076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.614119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.614152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.532 [2024-04-24 22:11:22.614185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.532 [2024-04-24 22:11:22.614203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.614982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.614998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.533 [2024-04-24 22:11:22.615530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.533 [2024-04-24 22:11:22.615547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.534 [2024-04-24 22:11:22.615562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.534 [2024-04-24 22:11:22.615579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.534 [2024-04-24 22:11:22.615598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.534 [2024-04-24 22:11:22.615616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.534 [2024-04-24 22:11:22.615632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.534 [2024-04-24 22:11:22.615657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.534 [2024-04-24 22:11:22.615674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:40.534 [2024-04-24 22:11:22.615692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.534 [2024-04-24 22:11:22.615707] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.615968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.615985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.534 [2024-04-24 22:11:22.616173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:40.534 [2024-04-24 22:11:22.616324] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d7990 was disconnected and freed. reset controller. 
00:19:40.534 [2024-04-24 22:11:22.616548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24885f0 (9): Bad file descriptor 00:19:40.534 [2024-04-24 22:11:22.616628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492860 is same with the state(5) to be set 00:19:40.534 [2024-04-24 22:11:22.616827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.616961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.616975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231b830 is same with the state(5) to be set 00:19:40.534 [2024-04-24 22:11:22.617026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 
22:11:22.617098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea190 is same with the state(5) to be set 00:19:40.534 [2024-04-24 22:11:22.617210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.534 [2024-04-24 22:11:22.617336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.534 [2024-04-24 22:11:22.617350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a480 is same with the state(5) to be set 00:19:40.535 [2024-04-24 22:11:22.617383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307200 (9): Bad file descriptor 00:19:40.535 [2024-04-24 22:11:22.617427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9190 (9): Bad file descriptor 00:19:40.535 [2024-04-24 22:11:22.617475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9e40 (9): Bad file descriptor 00:19:40.535 [2024-04-24 22:11:22.617529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617631] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488b10 is same with the state(5) to be set 00:19:40.535 [2024-04-24 22:11:22.617723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.535 [2024-04-24 22:11:22.617839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.617853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24883e0 is same with the 
state(5) to be set 00:19:40.535 [2024-04-24 22:11:22.618551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 
22:11:22.618765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.618972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.618990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.535 [2024-04-24 22:11:22.619280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.535 [2024-04-24 22:11:22.619298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 
[2024-04-24 22:11:22.619571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.619988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 
22:11:22.620347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.536 [2024-04-24 22:11:22.620653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.536 [2024-04-24 22:11:22.620670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.620686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.620705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.620720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.620738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.620754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.620773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.620788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.620806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.620823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.620839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6730 is same with the state(5) to be set 00:19:40.537 [2024-04-24 22:11:22.620922] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d6730 was disconnected and freed. reset controller. 
00:19:40.537 [2024-04-24 22:11:22.622219] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.537 [2024-04-24 22:11:22.622362] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.537 [2024-04-24 22:11:22.622463] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.537 [2024-04-24 22:11:22.622529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.622972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.622997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.537 [2024-04-24 22:11:22.623119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623303] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.537 [2024-04-24 22:11:22.623653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.537 [2024-04-24 22:11:22.623672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 
22:11:22.623892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.623966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.623984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 
[2024-04-24 22:11:22.624504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.624792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.624808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239d5d0 is same with the state(5) to be set 00:19:40.538 [2024-04-24 22:11:22.624889] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239d5d0 was disconnected and freed. reset controller. 
00:19:40.538 [2024-04-24 22:11:22.626480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:40.538 [2024-04-24 22:11:22.626536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24883e0 (9): Bad file descriptor 00:19:40.538 [2024-04-24 22:11:22.626603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.626628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.626652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.626670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.626688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.626704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.626721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.538 [2024-04-24 22:11:22.626738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.538 [2024-04-24 22:11:22.626755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.626968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.626985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627357] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 
22:11:22.627752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.539 [2024-04-24 22:11:22.627891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.539 [2024-04-24 22:11:22.627906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.627924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.627939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.627957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.627979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.627996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.540 [2024-04-24 22:11:22.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.628815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.628831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410350 is same with the state(5) to be set 00:19:40.540 [2024-04-24 22:11:22.628948] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2410350 was disconnected and freed. reset controller. 
00:19:40.540 [2024-04-24 22:11:22.630906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:40.540 [2024-04-24 22:11:22.630940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:40.540 [2024-04-24 22:11:22.630965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488b10 (9): Bad file descriptor
00:19:40.540 [2024-04-24 22:11:22.631024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492860 (9): Bad file descriptor
00:19:40.540 [2024-04-24 22:11:22.631064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231b830 (9): Bad file descriptor
00:19:40.540 [2024-04-24 22:11:22.631097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ea190 (9): Bad file descriptor
00:19:40.540 [2024-04-24 22:11:22.631133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a480 (9): Bad file descriptor
00:19:40.540 [2024-04-24 22:11:22.631170] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:40.540 [2024-04-24 22:11:22.632536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.540 [2024-04-24 22:11:22.632849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.540 [2024-04-24 22:11:22.632865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.632882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.632898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.632915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.632931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.632948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.632964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.632981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.632997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:40.541 [2024-04-24 22:11:22.633154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 
22:11:22.633947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.633964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.633980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.541 [2024-04-24 22:11:22.634243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.541 [2024-04-24 22:11:22.634259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 
[2024-04-24 22:11:22.634558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.634808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.634826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239e9a0 is same with the state(5) to be set 00:19:40.542 [2024-04-24 22:11:22.634910] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239e9a0 was disconnected and freed. reset controller. 00:19:40.542 [2024-04-24 22:11:22.635520] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.542 [2024-04-24 22:11:22.635817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.542 [2024-04-24 22:11:22.636022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.542 [2024-04-24 22:11:22.636052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24883e0 with addr=10.0.0.2, port=4420 00:19:40.542 [2024-04-24 22:11:22.636080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24883e0 is same with the state(5) to be set 00:19:40.542 [2024-04-24 22:11:22.636242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.542 [2024-04-24 22:11:22.636413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.542 [2024-04-24 22:11:22.636451] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2307200 with addr=10.0.0.2, port=4420 00:19:40.542 [2024-04-24 22:11:22.636469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307200 is same with the state(5) to be set 00:19:40.542 [2024-04-24 22:11:22.636591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.636978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.636997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.542 [2024-04-24 22:11:22.637277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.542 [2024-04-24 22:11:22.637293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 
22:11:22.637790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.637977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.637996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.543 [2024-04-24 22:11:22.638370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.543 [2024-04-24 22:11:22.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.543 [2024-04-24 22:11:22.638625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.638840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.638857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411570 is same with the state(5) to be set 00:19:40.544 [2024-04-24 22:11:22.641854] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.544 [2024-04-24 22:11:22.641960] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.544 [2024-04-24 22:11:22.642044] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:40.544 [2024-04-24 22:11:22.642176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:40.544 [2024-04-24 22:11:22.642245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.642975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.642991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.643008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.643023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.544 [2024-04-24 22:11:22.643041] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.544 [2024-04-24 22:11:22.643056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 
22:11:22.643629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.643992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.545 [2024-04-24 22:11:22.644214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.545 [2024-04-24 22:11:22.644403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.545 [2024-04-24 22:11:22.644422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408430 is same with the state(5) to be set 00:19:40.545 [2024-04-24 22:11:22.646746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:40.546 [2024-04-24 22:11:22.646783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:40.546 [2024-04-24 22:11:22.646804] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:40.546 [2024-04-24 22:11:22.647078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.647245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.647274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488b10 with addr=10.0.0.2, port=4420 00:19:40.546 [2024-04-24 22:11:22.647293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488b10 is same with the state(5) to be set 00:19:40.546 [2024-04-24 22:11:22.647454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.647587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.647614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec9190 with addr=10.0.0.2, port=4420 00:19:40.546 [2024-04-24 22:11:22.647632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9190 is same with the state(5) to be set 00:19:40.546 [2024-04-24 22:11:22.647658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24883e0 (9): Bad 
file descriptor 00:19:40.546 [2024-04-24 22:11:22.647682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307200 (9): Bad file descriptor 00:19:40.546 [2024-04-24 22:11:22.647766] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.546 [2024-04-24 22:11:22.647801] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.546 [2024-04-24 22:11:22.647833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9190 (9): Bad file descriptor 00:19:40.546 [2024-04-24 22:11:22.647861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488b10 (9): Bad file descriptor 00:19:40.546 [2024-04-24 22:11:22.648529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.648672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.648700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9e40 with addr=10.0.0.2, port=4420 00:19:40.546 [2024-04-24 22:11:22.648724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9e40 is same with the state(5) to be set 00:19:40.546 [2024-04-24 22:11:22.648890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.649086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.649115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ea190 with addr=10.0.0.2, port=4420 00:19:40.546 [2024-04-24 22:11:22.649134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ea190 is same with the state(5) to be set 00:19:40.546 [2024-04-24 22:11:22.649294] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.649472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.546 [2024-04-24 22:11:22.649500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24885f0 with addr=10.0.0.2, port=4420 00:19:40.546 [2024-04-24 22:11:22.649517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24885f0 is same with the state(5) to be set 00:19:40.546 [2024-04-24 22:11:22.649538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:40.546 [2024-04-24 22:11:22.649555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:40.546 [2024-04-24 22:11:22.649573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:40.546 [2024-04-24 22:11:22.649597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:40.546 [2024-04-24 22:11:22.649613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:40.546 [2024-04-24 22:11:22.649628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:19:40.546 [2024-04-24 22:11:22.650317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:40.546 [2024-04-24 22:11:22.650957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.650973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.650990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651136] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.546 [2024-04-24 22:11:22.651206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.546 [2024-04-24 22:11:22.651223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 
22:11:22.651718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.651973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.651988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 
[2024-04-24 22:11:22.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.547 [2024-04-24 22:11:22.652544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.547 [2024-04-24 22:11:22.652560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239fe50 is same with the state(5) to be set 00:19:40.548 [2024-04-24 22:11:22.654006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 
[2024-04-24 22:11:22.654330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.548 [2024-04-24 22:11:22.654706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.548 [2024-04-24 22:11:22.654723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.548 [2024-04-24 22:11:22.654739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ sqid:1 cid:21-63, lba 10880-16256, len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:19:40.549 [2024-04-24 22:11:22.656220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3dd0 is same with the state(5) to be set
00:19:40.549 [2024-04-24 22:11:22.657633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:40.549 [2024-04-24 22:11:22.657665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ sqid:1 cid:1-62, lba 16512-24320, len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:19:40.551 [2024-04-24 22:11:22.659846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.551 [2024-04-24 22:11:22.659866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.551 [2024-04-24 22:11:22.659883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d5280 is same with the state(5) to be set 00:19:40.551 [2024-04-24 22:11:22.661965] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.551 [2024-04-24 22:11:22.661995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.551 [2024-04-24 22:11:22.662016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:40.551 [2024-04-24 22:11:22.662038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:40.551 task offset: 16384 on job bdev=Nvme9n1 fails 00:19:40.551 00:19:40.551 Latency(us) 00:19:40.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.551 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme1n1 ended in about 0.79 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme1n1 : 0.79 161.73 10.11 80.86 0.00 260126.59 13495.56 257872.02 00:19:40.551 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme2n1 ended in about 0.80 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme2n1 : 0.80 165.19 10.32 80.09 0.00 250721.52 25437.68 264085.81 00:19:40.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme3n1 ended in about 0.79 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 
00:19:40.551 Nvme3n1 : 0.79 162.19 10.14 81.09 0.00 245899.57 22719.15 299815.06 00:19:40.551 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme4n1 ended in about 0.80 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme4n1 : 0.80 159.86 9.99 79.93 0.00 243046.78 18544.26 278066.82 00:19:40.551 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme5n1 ended in about 0.81 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme5n1 : 0.81 78.74 4.92 78.74 0.00 360704.76 22816.24 299815.06 00:19:40.551 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme6n1 ended in about 0.82 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme6n1 : 0.82 78.39 4.90 78.39 0.00 352676.60 23592.96 307582.29 00:19:40.551 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme7n1 ended in about 0.82 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme7n1 : 0.82 156.09 9.76 78.04 0.00 229605.33 18738.44 285834.05 00:19:40.551 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme8n1 ended in about 0.79 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme8n1 : 0.79 162.97 10.19 81.49 0.00 211609.47 12427.57 282727.16 00:19:40.551 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme9n1 ended in about 0.78 seconds with error 00:19:40.551 Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme9n1 : 0.78 163.86 10.24 81.93 0.00 203676.89 30292.20 262532.36 00:19:40.551 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:40.551 Job: Nvme10n1 ended in about 0.80 seconds with error 00:19:40.551 
Verification LBA range: start 0x0 length 0x400 00:19:40.551 Nvme10n1 : 0.80 79.54 4.97 79.54 0.00 307647.91 21554.06 313796.08 00:19:40.551 =================================================================================================================== 00:19:40.551 Total : 1368.57 85.54 800.12 0.00 258357.03 12427.57 313796.08 00:19:40.551 [2024-04-24 22:11:22.690610] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:40.551 [2024-04-24 22:11:22.690709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:40.551 [2024-04-24 22:11:22.690822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9e40 (9): Bad file descriptor 00:19:40.551 [2024-04-24 22:11:22.690856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ea190 (9): Bad file descriptor 00:19:40.551 [2024-04-24 22:11:22.690878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24885f0 (9): Bad file descriptor 00:19:40.551 [2024-04-24 22:11:22.690897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:40.551 [2024-04-24 22:11:22.690914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:40.551 [2024-04-24 22:11:22.690932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:40.551 [2024-04-24 22:11:22.690963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.551 [2024-04-24 22:11:22.690980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.551 [2024-04-24 22:11:22.690995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:40.551 [2024-04-24 22:11:22.691082] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.551 [2024-04-24 22:11:22.691111] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.551 [2024-04-24 22:11:22.691133] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.551 [2024-04-24 22:11:22.691154] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.551 [2024-04-24 22:11:22.691174] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.551 [2024-04-24 22:11:22.691346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.551 [2024-04-24 22:11:22.691374] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:40.551 [2024-04-24 22:11:22.691684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.551 [2024-04-24 22:11:22.691904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.551 [2024-04-24 22:11:22.691934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231b830 with addr=10.0.0.2, port=4420 00:19:40.551 [2024-04-24 22:11:22.691957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231b830 is same with the state(5) to be set 00:19:40.551 [2024-04-24 22:11:22.692150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.551 [2024-04-24 22:11:22.692300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.551 [2024-04-24 22:11:22.692331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230a480 with addr=10.0.0.2, port=4420 00:19:40.551 [2024-04-24 22:11:22.692350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230a480 is same with the state(5) to be set 00:19:40.551 [2024-04-24 22:11:22.692505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.692665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.692694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2492860 with addr=10.0.0.2, port=4420 00:19:40.552 [2024-04-24 22:11:22.692712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492860 is same with the state(5) to be set 00:19:40.552 [2024-04-24 22:11:22.692741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.692756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization 
failed 00:19:40.552 [2024-04-24 22:11:22.692771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:40.552 [2024-04-24 22:11:22.692792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.692808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.692823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:40.552 [2024-04-24 22:11:22.692842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.692858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.692872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:40.552 [2024-04-24 22:11:22.692912] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.552 [2024-04-24 22:11:22.692938] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.552 [2024-04-24 22:11:22.692970] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.552 [2024-04-24 22:11:22.692992] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:40.552 [2024-04-24 22:11:22.693013] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:40.552 [2024-04-24 22:11:22.693979] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:40.552 [2024-04-24 22:11:22.694010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:40.552 [2024-04-24 22:11:22.694055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.694078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.694091] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.694136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231b830 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.694162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230a480 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.694183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492860 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.694258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.552 [2024-04-24 22:11:22.694286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:40.552 [2024-04-24 22:11:22.694493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.694663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.694695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2307200 with addr=10.0.0.2, port=4420 00:19:40.552 [2024-04-24 22:11:22.694713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307200 is same with the state(5) 
to be set 00:19:40.552 [2024-04-24 22:11:22.694863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.695035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.695063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24883e0 with addr=10.0.0.2, port=4420 00:19:40.552 [2024-04-24 22:11:22.695087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24883e0 is same with the state(5) to be set 00:19:40.552 [2024-04-24 22:11:22.695104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.695119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.695133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:40.552 [2024-04-24 22:11:22.695153] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.695170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.695184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:40.552 [2024-04-24 22:11:22.695201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.695217] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.695243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:19:40.552 [2024-04-24 22:11:22.695311] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.695333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.695347] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.695491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.695637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.695666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec9190 with addr=10.0.0.2, port=4420 00:19:40.552 [2024-04-24 22:11:22.695685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9190 is same with the state(5) to be set 00:19:40.552 [2024-04-24 22:11:22.695868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.696045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.552 [2024-04-24 22:11:22.696073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2488b10 with addr=10.0.0.2, port=4420 00:19:40.552 [2024-04-24 22:11:22.696091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2488b10 is same with the state(5) to be set 00:19:40.552 [2024-04-24 22:11:22.696112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2307200 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.696133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24883e0 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.696183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9190 (9): Bad file descriptor 
00:19:40.552 [2024-04-24 22:11:22.696211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2488b10 (9): Bad file descriptor 00:19:40.552 [2024-04-24 22:11:22.696229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.696244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.696259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:40.552 [2024-04-24 22:11:22.696277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.696294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.696313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:40.552 [2024-04-24 22:11:22.696373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.696404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.696422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.696445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.696459] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:40.552 [2024-04-24 22:11:22.696478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:40.552 [2024-04-24 22:11:22.696493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:40.552 [2024-04-24 22:11:22.696508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:40.552 [2024-04-24 22:11:22.696549] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.552 [2024-04-24 22:11:22.696569] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.128 22:11:23 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:41.128 22:11:23 -- target/shutdown.sh@139 -- # sleep 1 00:19:42.061 22:11:24 -- target/shutdown.sh@142 -- # kill -9 3982743 00:19:42.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3982743) - No such process 00:19:42.061 22:11:24 -- target/shutdown.sh@142 -- # true 00:19:42.061 22:11:24 -- target/shutdown.sh@144 -- # stoptarget 00:19:42.061 22:11:24 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:42.061 22:11:24 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:42.061 22:11:24 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:42.061 22:11:24 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:42.061 22:11:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:42.061 22:11:24 -- nvmf/common.sh@117 -- # sync 00:19:42.062 22:11:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.062 22:11:24 -- nvmf/common.sh@120 -- # set +e 00:19:42.062 22:11:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.062 22:11:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.062 rmmod nvme_tcp 
00:19:42.062 rmmod nvme_fabrics 00:19:42.062 rmmod nvme_keyring 00:19:42.062 22:11:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.062 22:11:24 -- nvmf/common.sh@124 -- # set -e 00:19:42.062 22:11:24 -- nvmf/common.sh@125 -- # return 0 00:19:42.062 22:11:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:42.062 22:11:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:42.062 22:11:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:42.062 22:11:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:42.062 22:11:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.062 22:11:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.062 22:11:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.062 22:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.062 22:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.592 22:11:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:44.592 00:19:44.592 real 0m7.444s 00:19:44.592 user 0m17.793s 00:19:44.592 sys 0m1.466s 00:19:44.592 22:11:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:44.592 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.592 ************************************ 00:19:44.592 END TEST nvmf_shutdown_tc3 00:19:44.592 ************************************ 00:19:44.592 22:11:26 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:44.592 00:19:44.592 real 0m29.070s 00:19:44.592 user 1m21.519s 00:19:44.592 sys 0m6.866s 00:19:44.592 22:11:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:44.592 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.592 ************************************ 00:19:44.592 END TEST nvmf_shutdown 00:19:44.592 ************************************ 00:19:44.592 22:11:26 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:19:44.592 22:11:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.592 
22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.592 22:11:26 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:19:44.592 22:11:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:44.592 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.592 22:11:26 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:19:44.592 22:11:26 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:44.592 22:11:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:44.592 22:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:44.592 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.592 ************************************ 00:19:44.592 START TEST nvmf_multicontroller 00:19:44.592 ************************************ 00:19:44.592 22:11:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:44.592 * Looking for test storage... 
00:19:44.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:44.592 22:11:26 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.592 22:11:26 -- nvmf/common.sh@7 -- # uname -s 00:19:44.592 22:11:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.592 22:11:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.592 22:11:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.592 22:11:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.592 22:11:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.592 22:11:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.592 22:11:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.592 22:11:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.592 22:11:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.592 22:11:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.592 22:11:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.592 22:11:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.592 22:11:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.592 22:11:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.592 22:11:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.592 22:11:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.592 22:11:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.592 22:11:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.592 22:11:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.592 22:11:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.592 22:11:26 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.592 22:11:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.592 22:11:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.592 22:11:26 -- paths/export.sh@5 -- # export PATH 00:19:44.592 22:11:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.592 22:11:26 -- nvmf/common.sh@47 -- # : 0 00:19:44.592 22:11:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.592 22:11:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.592 22:11:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.592 22:11:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.592 22:11:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.592 22:11:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.592 22:11:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.592 22:11:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.592 22:11:26 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.592 22:11:26 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.592 22:11:26 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:44.592 22:11:26 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:44.592 22:11:26 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.592 22:11:26 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:44.592 22:11:26 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:44.592 22:11:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:44.592 22:11:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.592 22:11:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:44.592 22:11:26 -- nvmf/common.sh@399 -- # local -g 
is_hw=no 00:19:44.592 22:11:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:44.592 22:11:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.592 22:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.592 22:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.592 22:11:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:44.592 22:11:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:44.592 22:11:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.592 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:47.124 22:11:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:47.124 22:11:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.124 22:11:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.124 22:11:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.124 22:11:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.124 22:11:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.124 22:11:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.124 22:11:28 -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.124 22:11:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.124 22:11:28 -- nvmf/common.sh@296 -- # e810=() 00:19:47.124 22:11:28 -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.124 22:11:28 -- nvmf/common.sh@297 -- # x722=() 00:19:47.124 22:11:28 -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.124 22:11:28 -- nvmf/common.sh@298 -- # mlx=() 00:19:47.124 22:11:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.124 22:11:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.124 22:11:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.124 22:11:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.124 22:11:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.124 22:11:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:47.124 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:47.124 22:11:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.124 22:11:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:47.124 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:47.124 22:11:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.124 22:11:28 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.124 22:11:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.124 22:11:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.124 22:11:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:47.124 Found net devices under 0000:84:00.0: cvl_0_0 00:19:47.124 22:11:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.124 22:11:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.124 22:11:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.124 22:11:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.124 22:11:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:47.124 Found net devices under 0000:84:00.1: cvl_0_1 00:19:47.124 22:11:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.124 22:11:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:47.124 22:11:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:47.124 22:11:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:47.124 22:11:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.124 22:11:28 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.124 22:11:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.124 22:11:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.124 22:11:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.124 22:11:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.124 22:11:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.124 22:11:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.124 22:11:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.124 22:11:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.124 22:11:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.124 22:11:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.124 22:11:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.124 22:11:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.124 22:11:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.124 22:11:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.124 22:11:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.124 22:11:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.124 22:11:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.124 22:11:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:47.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:19:47.124 00:19:47.124 --- 10.0.0.2 ping statistics --- 00:19:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.124 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:47.124 22:11:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:47.124 00:19:47.124 --- 10.0.0.1 ping statistics --- 00:19:47.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.124 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:47.124 22:11:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.124 22:11:29 -- nvmf/common.sh@411 -- # return 0 00:19:47.124 22:11:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:47.124 22:11:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.124 22:11:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:47.124 22:11:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:47.124 22:11:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.124 22:11:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:47.124 22:11:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:47.124 22:11:29 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:47.124 22:11:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:47.124 22:11:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:47.124 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.124 22:11:29 -- nvmf/common.sh@470 -- # nvmfpid=3985284 00:19:47.124 22:11:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:47.124 22:11:29 -- nvmf/common.sh@471 -- # waitforlisten 3985284 00:19:47.124 22:11:29 -- 
common/autotest_common.sh@817 -- # '[' -z 3985284 ']' 00:19:47.124 22:11:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.124 22:11:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:47.124 22:11:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.124 22:11:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:47.124 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.124 [2024-04-24 22:11:29.146040] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:47.124 [2024-04-24 22:11:29.146141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.124 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.124 [2024-04-24 22:11:29.231562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.124 [2024-04-24 22:11:29.355769] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.125 [2024-04-24 22:11:29.355831] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.125 [2024-04-24 22:11:29.355849] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.125 [2024-04-24 22:11:29.355863] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.125 [2024-04-24 22:11:29.355875] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.125 [2024-04-24 22:11:29.355971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.125 [2024-04-24 22:11:29.356024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.125 [2024-04-24 22:11:29.356027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.383 22:11:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.383 22:11:29 -- common/autotest_common.sh@850 -- # return 0 00:19:47.383 22:11:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:47.383 22:11:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.383 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 22:11:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.383 22:11:29 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.383 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.383 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 [2024-04-24 22:11:29.508000] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.383 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.383 22:11:29 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.383 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.383 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 Malloc0 00:19:47.383 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.383 22:11:29 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.383 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.383 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.383 22:11:29 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.383 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.383 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.383 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 [2024-04-24 22:11:29.572640] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:47.384 [2024-04-24 22:11:29.572954] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 [2024-04-24 22:11:29.580792] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 Malloc1 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:47.384 22:11:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.384 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.384 22:11:29 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:47.384 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.384 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.642 22:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.642 22:11:29 -- host/multicontroller.sh@44 -- # bdevperf_pid=3985384 00:19:47.642 22:11:29 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:47.642 22:11:29 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.642 22:11:29 -- host/multicontroller.sh@47 -- # waitforlisten 3985384 /var/tmp/bdevperf.sock 00:19:47.642 22:11:29 -- common/autotest_common.sh@817 -- # '[' -z 3985384 ']' 00:19:47.642 22:11:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.642 
22:11:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:47.642 22:11:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.642 22:11:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:47.642 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 22:11:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.902 22:11:29 -- common/autotest_common.sh@850 -- # return 0 00:19:47.902 22:11:29 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:47.902 22:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.902 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 NVMe0n1 00:19:47.902 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.902 22:11:30 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:47.902 22:11:30 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:47.902 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.902 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.902 1 00:19:47.902 22:11:30 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:47.902 22:11:30 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.902 22:11:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:47.902 22:11:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.902 22:11:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:47.902 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.902 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 request: 00:19:47.902 { 00:19:47.902 "name": "NVMe0", 00:19:47.902 "trtype": "tcp", 00:19:47.902 "traddr": "10.0.0.2", 00:19:47.902 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:47.902 "hostaddr": "10.0.0.2", 00:19:47.902 "hostsvcid": "60000", 00:19:47.902 "adrfam": "ipv4", 00:19:47.902 "trsvcid": "4420", 00:19:47.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.902 "method": "bdev_nvme_attach_controller", 00:19:47.902 "req_id": 1 00:19:47.902 } 00:19:47.902 Got JSON-RPC error response 00:19:47.902 response: 00:19:47.902 { 00:19:47.902 "code": -114, 00:19:47.902 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:47.902 } 00:19:47.902 22:11:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.902 22:11:30 -- common/autotest_common.sh@641 -- # es=1 00:19:47.902 22:11:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.902 22:11:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.902 22:11:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.902 22:11:30 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:47.902 22:11:30 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.902 22:11:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:47.902 22:11:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.902 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.902 22:11:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:47.902 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.902 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.902 request: 00:19:47.902 { 00:19:47.902 "name": "NVMe0", 00:19:47.902 "trtype": "tcp", 00:19:47.902 "traddr": "10.0.0.2", 00:19:47.902 "hostaddr": "10.0.0.2", 00:19:47.902 "hostsvcid": "60000", 00:19:47.902 "adrfam": "ipv4", 00:19:47.902 "trsvcid": "4420", 00:19:47.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.902 "method": "bdev_nvme_attach_controller", 00:19:47.902 "req_id": 1 00:19:47.902 } 00:19:47.902 Got JSON-RPC error response 00:19:47.902 response: 00:19:47.902 { 00:19:47.902 "code": -114, 00:19:47.902 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:47.902 } 00:19:47.902 22:11:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.902 22:11:30 -- common/autotest_common.sh@641 -- # es=1 00:19:47.902 22:11:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.902 22:11:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.902 22:11:30 
-- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.902 22:11:30 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:47.902 22:11:30 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.902 22:11:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:47.903 22:11:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.903 22:11:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:47.903 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.903 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.903 request: 00:19:47.903 { 00:19:47.903 "name": "NVMe0", 00:19:47.903 "trtype": "tcp", 00:19:47.903 "traddr": "10.0.0.2", 00:19:47.903 "hostaddr": "10.0.0.2", 00:19:47.903 "hostsvcid": "60000", 00:19:47.903 "adrfam": "ipv4", 00:19:47.903 "trsvcid": "4420", 00:19:47.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.903 "multipath": "disable", 00:19:47.903 "method": "bdev_nvme_attach_controller", 00:19:47.903 "req_id": 1 00:19:47.903 } 00:19:47.903 Got JSON-RPC error response 00:19:47.903 response: 00:19:47.903 { 00:19:47.903 "code": -114, 00:19:47.903 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:47.903 } 00:19:47.903 22:11:30 -- 
common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.903 22:11:30 -- common/autotest_common.sh@641 -- # es=1 00:19:47.903 22:11:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.903 22:11:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.903 22:11:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.903 22:11:30 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:47.903 22:11:30 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.903 22:11:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:47.903 22:11:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.903 22:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.903 22:11:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:47.903 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.903 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.903 request: 00:19:47.903 { 00:19:47.903 "name": "NVMe0", 00:19:47.903 "trtype": "tcp", 00:19:47.903 "traddr": "10.0.0.2", 00:19:47.903 "hostaddr": "10.0.0.2", 00:19:47.903 "hostsvcid": "60000", 00:19:47.903 "adrfam": "ipv4", 00:19:47.903 "trsvcid": "4420", 00:19:47.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.903 "multipath": "failover", 00:19:47.903 "method": "bdev_nvme_attach_controller", 
00:19:47.903 "req_id": 1 00:19:47.903 } 00:19:47.903 Got JSON-RPC error response 00:19:47.903 response: 00:19:47.903 { 00:19:47.903 "code": -114, 00:19:47.903 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:47.903 } 00:19:47.903 22:11:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.903 22:11:30 -- common/autotest_common.sh@641 -- # es=1 00:19:47.903 22:11:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.903 22:11:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.903 22:11:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.903 22:11:30 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:47.903 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.903 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.161 00:19:48.161 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.162 22:11:30 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.162 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.162 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.162 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.162 22:11:30 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:48.162 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.162 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.420 00:19:48.420 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.420 22:11:30 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:19:48.420 22:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.420 22:11:30 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:48.420 22:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.420 22:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.420 22:11:30 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:48.420 22:11:30 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.794 0 00:19:49.794 22:11:31 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:49.794 22:11:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.794 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:49.794 22:11:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.794 22:11:31 -- host/multicontroller.sh@100 -- # killprocess 3985384 00:19:49.794 22:11:31 -- common/autotest_common.sh@936 -- # '[' -z 3985384 ']' 00:19:49.794 22:11:31 -- common/autotest_common.sh@940 -- # kill -0 3985384 00:19:49.794 22:11:31 -- common/autotest_common.sh@941 -- # uname 00:19:49.794 22:11:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.794 22:11:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985384 00:19:49.794 22:11:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:49.794 22:11:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:49.794 22:11:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985384' 00:19:49.794 killing process with pid 3985384 00:19:49.794 22:11:31 -- common/autotest_common.sh@955 -- # kill 3985384 00:19:49.794 22:11:31 -- common/autotest_common.sh@960 -- # wait 3985384 00:19:49.794 22:11:31 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.794 22:11:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.794 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:49.794 22:11:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.794 22:11:31 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:49.794 22:11:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.794 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:49.794 22:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.794 22:11:32 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:49.794 22:11:32 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:49.794 22:11:32 -- common/autotest_common.sh@1598 -- # read -r file 00:19:49.794 22:11:32 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:49.794 22:11:32 -- common/autotest_common.sh@1597 -- # sort -u 00:19:49.794 22:11:32 -- common/autotest_common.sh@1599 -- # cat 00:19:49.794 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:49.794 [2024-04-24 22:11:29.685828] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:19:49.794 [2024-04-24 22:11:29.685920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985384 ] 00:19:49.794 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.794 [2024-04-24 22:11:29.753108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.794 [2024-04-24 22:11:29.871298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.794 [2024-04-24 22:11:30.485773] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 512f69e0-c6c2-49f7-9442-522d1b8fd5e2 already exists 00:19:49.794 [2024-04-24 22:11:30.485818] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:512f69e0-c6c2-49f7-9442-522d1b8fd5e2 alias for bdev NVMe1n1 00:19:49.794 [2024-04-24 22:11:30.485840] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:49.794 Running I/O for 1 seconds... 00:19:49.794 00:19:49.794 Latency(us) 00:19:49.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.794 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:49.794 NVMe0n1 : 1.00 17037.96 66.55 0.00 0.00 7502.99 3276.80 13495.56 00:19:49.794 =================================================================================================================== 00:19:49.794 Total : 17037.96 66.55 0.00 0.00 7502.99 3276.80 13495.56 00:19:49.794 Received shutdown signal, test time was about 1.000000 seconds 00:19:49.794 00:19:49.794 Latency(us) 00:19:49.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.794 =================================================================================================================== 00:19:49.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.794 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:49.794 22:11:32 -- 
common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:49.794 22:11:32 -- common/autotest_common.sh@1598 -- # read -r file 00:19:49.794 22:11:32 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:49.794 22:11:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:49.794 22:11:32 -- nvmf/common.sh@117 -- # sync 00:19:49.794 22:11:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.794 22:11:32 -- nvmf/common.sh@120 -- # set +e 00:19:49.794 22:11:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.794 22:11:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.794 rmmod nvme_tcp 00:19:49.794 rmmod nvme_fabrics 00:19:50.053 rmmod nvme_keyring 00:19:50.053 22:11:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.053 22:11:32 -- nvmf/common.sh@124 -- # set -e 00:19:50.053 22:11:32 -- nvmf/common.sh@125 -- # return 0 00:19:50.053 22:11:32 -- nvmf/common.sh@478 -- # '[' -n 3985284 ']' 00:19:50.053 22:11:32 -- nvmf/common.sh@479 -- # killprocess 3985284 00:19:50.053 22:11:32 -- common/autotest_common.sh@936 -- # '[' -z 3985284 ']' 00:19:50.053 22:11:32 -- common/autotest_common.sh@940 -- # kill -0 3985284 00:19:50.053 22:11:32 -- common/autotest_common.sh@941 -- # uname 00:19:50.053 22:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.053 22:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3985284 00:19:50.053 22:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.053 22:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.053 22:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3985284' 00:19:50.053 killing process with pid 3985284 00:19:50.053 22:11:32 -- common/autotest_common.sh@955 -- # kill 3985284 00:19:50.053 [2024-04-24 22:11:32.108238] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:50.053 22:11:32 -- common/autotest_common.sh@960 -- # wait 3985284 00:19:50.312 22:11:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:50.312 22:11:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:50.312 22:11:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:50.312 22:11:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.312 22:11:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.312 22:11:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.312 22:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.312 22:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.842 22:11:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:52.842 00:19:52.842 real 0m7.959s 00:19:52.842 user 0m12.296s 00:19:52.842 sys 0m2.684s 00:19:52.842 22:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.842 22:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:52.842 ************************************ 00:19:52.842 END TEST nvmf_multicontroller 00:19:52.842 ************************************ 00:19:52.842 22:11:34 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:52.842 22:11:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:52.842 22:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.842 22:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:52.842 ************************************ 00:19:52.842 START TEST nvmf_aer 00:19:52.842 ************************************ 00:19:52.842 22:11:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:52.842 * Looking for test storage... 
00:19:52.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.842 22:11:34 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.842 22:11:34 -- nvmf/common.sh@7 -- # uname -s 00:19:52.842 22:11:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.842 22:11:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.842 22:11:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.842 22:11:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.842 22:11:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.842 22:11:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.842 22:11:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.842 22:11:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.842 22:11:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.842 22:11:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.842 22:11:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.842 22:11:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.842 22:11:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.842 22:11:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.842 22:11:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.843 22:11:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.843 22:11:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.843 22:11:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.843 22:11:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.843 22:11:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.843 22:11:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.843 22:11:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.843 22:11:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.843 22:11:34 -- paths/export.sh@5 -- # export PATH 00:19:52.843 22:11:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.843 22:11:34 -- nvmf/common.sh@47 -- # : 0 00:19:52.843 22:11:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.843 22:11:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.843 22:11:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.843 22:11:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.843 22:11:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.843 22:11:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.843 22:11:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.843 22:11:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.843 22:11:34 -- host/aer.sh@11 -- # nvmftestinit 00:19:52.843 22:11:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.843 22:11:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.843 22:11:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.843 22:11:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.843 22:11:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.843 22:11:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.843 22:11:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.843 22:11:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.843 22:11:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.843 22:11:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.843 22:11:34 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:19:52.843 22:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:55.374 22:11:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.374 22:11:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.374 22:11:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.374 22:11:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.374 22:11:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.374 22:11:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.374 22:11:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.374 22:11:37 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.374 22:11:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.374 22:11:37 -- nvmf/common.sh@296 -- # e810=() 00:19:55.374 22:11:37 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.374 22:11:37 -- nvmf/common.sh@297 -- # x722=() 00:19:55.374 22:11:37 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.374 22:11:37 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.374 22:11:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.374 22:11:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.374 22:11:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.374 22:11:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.374 22:11:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.374 22:11:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:55.374 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:55.374 22:11:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.374 22:11:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:55.374 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:55.374 22:11:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:55.374 22:11:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.374 22:11:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.374 22:11:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:55.374 Found net devices under 0000:84:00.0: cvl_0_0 00:19:55.374 22:11:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.374 22:11:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.374 22:11:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.374 22:11:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.374 22:11:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:55.374 Found net devices under 0000:84:00.1: cvl_0_1 00:19:55.374 22:11:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.374 22:11:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:55.374 22:11:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:55.374 22:11:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:55.374 22:11:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.374 22:11:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.374 22:11:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.374 22:11:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.374 22:11:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.374 22:11:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.374 22:11:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.374 22:11:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:19:55.374 22:11:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.374 22:11:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.374 22:11:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.374 22:11:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.374 22:11:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.374 22:11:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.374 22:11:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.374 22:11:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.374 22:11:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.374 22:11:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.374 22:11:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.375 22:11:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:19:55.375 00:19:55.375 --- 10.0.0.2 ping statistics --- 00:19:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.375 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:55.375 22:11:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:55.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:19:55.375 00:19:55.375 --- 10.0.0.1 ping statistics --- 00:19:55.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.375 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:19:55.375 22:11:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.375 22:11:37 -- nvmf/common.sh@411 -- # return 0 00:19:55.375 22:11:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:55.375 22:11:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.375 22:11:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:55.375 22:11:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:55.375 22:11:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.375 22:11:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:55.375 22:11:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:55.375 22:11:37 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:55.375 22:11:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:55.375 22:11:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:55.375 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.375 22:11:37 -- nvmf/common.sh@470 -- # nvmfpid=3987669 00:19:55.375 22:11:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.375 22:11:37 -- nvmf/common.sh@471 -- # waitforlisten 3987669 00:19:55.375 22:11:37 -- common/autotest_common.sh@817 -- # '[' -z 3987669 ']' 00:19:55.375 22:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.375 22:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.375 22:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:55.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.375 22:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.375 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.375 [2024-04-24 22:11:37.244003] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:55.375 [2024-04-24 22:11:37.244098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.375 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.375 [2024-04-24 22:11:37.323636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.375 [2024-04-24 22:11:37.448185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.375 [2024-04-24 22:11:37.448257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.375 [2024-04-24 22:11:37.448274] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.375 [2024-04-24 22:11:37.448293] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.375 [2024-04-24 22:11:37.448306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.375 [2024-04-24 22:11:37.448400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.375 [2024-04-24 22:11:37.448455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.375 [2024-04-24 22:11:37.448482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.375 [2024-04-24 22:11:37.448486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.375 22:11:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.375 22:11:37 -- common/autotest_common.sh@850 -- # return 0 00:19:55.375 22:11:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:55.375 22:11:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.375 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.375 22:11:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.375 22:11:37 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.375 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.375 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.375 [2024-04-24 22:11:37.614474] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.375 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.375 22:11:37 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:55.375 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.375 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.634 Malloc0 00:19:55.634 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.634 22:11:37 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:55.634 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.634 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.634 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:19:55.634 22:11:37 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.634 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.634 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.634 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.634 22:11:37 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.634 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.634 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.634 [2024-04-24 22:11:37.667449] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:55.634 [2024-04-24 22:11:37.667782] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.634 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.634 22:11:37 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:55.634 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.634 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.634 [2024-04-24 22:11:37.675434] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:55.634 [ 00:19:55.634 { 00:19:55.634 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:55.634 "subtype": "Discovery", 00:19:55.634 "listen_addresses": [], 00:19:55.634 "allow_any_host": true, 00:19:55.634 "hosts": [] 00:19:55.634 }, 00:19:55.634 { 00:19:55.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.634 "subtype": "NVMe", 00:19:55.634 "listen_addresses": [ 00:19:55.634 { 00:19:55.634 "transport": "TCP", 00:19:55.634 "trtype": "TCP", 00:19:55.634 "adrfam": "IPv4", 00:19:55.634 "traddr": "10.0.0.2", 00:19:55.634 
"trsvcid": "4420" 00:19:55.634 } 00:19:55.634 ], 00:19:55.634 "allow_any_host": true, 00:19:55.634 "hosts": [], 00:19:55.634 "serial_number": "SPDK00000000000001", 00:19:55.634 "model_number": "SPDK bdev Controller", 00:19:55.634 "max_namespaces": 2, 00:19:55.634 "min_cntlid": 1, 00:19:55.634 "max_cntlid": 65519, 00:19:55.634 "namespaces": [ 00:19:55.634 { 00:19:55.634 "nsid": 1, 00:19:55.634 "bdev_name": "Malloc0", 00:19:55.634 "name": "Malloc0", 00:19:55.634 "nguid": "27DAC3B6A2C943CD98DE03662389CED3", 00:19:55.634 "uuid": "27dac3b6-a2c9-43cd-98de-03662389ced3" 00:19:55.634 } 00:19:55.634 ] 00:19:55.634 } 00:19:55.634 ] 00:19:55.634 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.634 22:11:37 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:55.634 22:11:37 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:55.634 22:11:37 -- host/aer.sh@33 -- # aerpid=3987772 00:19:55.634 22:11:37 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:55.634 22:11:37 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:55.634 22:11:37 -- common/autotest_common.sh@1251 -- # local i=0 00:19:55.634 22:11:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.634 22:11:37 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:19:55.634 22:11:37 -- common/autotest_common.sh@1254 -- # i=1 00:19:55.634 22:11:37 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:55.634 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.634 22:11:37 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:55.634 22:11:37 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:19:55.634 22:11:37 -- common/autotest_common.sh@1254 -- # i=2 00:19:55.634 22:11:37 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:55.892 22:11:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.892 22:11:37 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:55.892 22:11:37 -- common/autotest_common.sh@1262 -- # return 0 00:19:55.892 22:11:37 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:55.892 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.892 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.892 Malloc1 00:19:55.892 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.892 22:11:37 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:55.892 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.892 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.892 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.892 22:11:37 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:55.892 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.892 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.892 Asynchronous Event Request test 00:19:55.892 Attaching to 10.0.0.2 00:19:55.892 Attached to 10.0.0.2 00:19:55.892 Registering asynchronous event callbacks... 00:19:55.892 Starting namespace attribute notice tests for all controllers... 00:19:55.892 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:55.892 aer_cb - Changed Namespace 00:19:55.892 Cleaning up... 
00:19:55.892 [ 00:19:55.892 { 00:19:55.892 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:55.892 "subtype": "Discovery", 00:19:55.892 "listen_addresses": [], 00:19:55.892 "allow_any_host": true, 00:19:55.892 "hosts": [] 00:19:55.892 }, 00:19:55.892 { 00:19:55.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.892 "subtype": "NVMe", 00:19:55.892 "listen_addresses": [ 00:19:55.892 { 00:19:55.892 "transport": "TCP", 00:19:55.892 "trtype": "TCP", 00:19:55.892 "adrfam": "IPv4", 00:19:55.892 "traddr": "10.0.0.2", 00:19:55.892 "trsvcid": "4420" 00:19:55.892 } 00:19:55.892 ], 00:19:55.892 "allow_any_host": true, 00:19:55.892 "hosts": [], 00:19:55.892 "serial_number": "SPDK00000000000001", 00:19:55.892 "model_number": "SPDK bdev Controller", 00:19:55.892 "max_namespaces": 2, 00:19:55.892 "min_cntlid": 1, 00:19:55.892 "max_cntlid": 65519, 00:19:55.892 "namespaces": [ 00:19:55.892 { 00:19:55.892 "nsid": 1, 00:19:55.892 "bdev_name": "Malloc0", 00:19:55.892 "name": "Malloc0", 00:19:55.892 "nguid": "27DAC3B6A2C943CD98DE03662389CED3", 00:19:55.892 "uuid": "27dac3b6-a2c9-43cd-98de-03662389ced3" 00:19:55.892 }, 00:19:55.892 { 00:19:55.893 "nsid": 2, 00:19:55.893 "bdev_name": "Malloc1", 00:19:55.893 "name": "Malloc1", 00:19:55.893 "nguid": "35A7DF8D3BC84284966135FD859C7CB2", 00:19:55.893 "uuid": "35a7df8d-3bc8-4284-9661-35fd859c7cb2" 00:19:55.893 } 00:19:55.893 ] 00:19:55.893 } 00:19:55.893 ] 00:19:55.893 22:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.893 22:11:37 -- host/aer.sh@43 -- # wait 3987772 00:19:55.893 22:11:37 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:55.893 22:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.893 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.893 22:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.893 22:11:38 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:55.893 22:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.893 
22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:19:55.893 22:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.893 22:11:38 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.893 22:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.893 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:19:55.893 22:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.893 22:11:38 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:55.893 22:11:38 -- host/aer.sh@51 -- # nvmftestfini 00:19:55.893 22:11:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:55.893 22:11:38 -- nvmf/common.sh@117 -- # sync 00:19:55.893 22:11:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.893 22:11:38 -- nvmf/common.sh@120 -- # set +e 00:19:55.893 22:11:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.893 22:11:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.893 rmmod nvme_tcp 00:19:55.893 rmmod nvme_fabrics 00:19:55.893 rmmod nvme_keyring 00:19:55.893 22:11:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.893 22:11:38 -- nvmf/common.sh@124 -- # set -e 00:19:55.893 22:11:38 -- nvmf/common.sh@125 -- # return 0 00:19:55.893 22:11:38 -- nvmf/common.sh@478 -- # '[' -n 3987669 ']' 00:19:55.893 22:11:38 -- nvmf/common.sh@479 -- # killprocess 3987669 00:19:55.893 22:11:38 -- common/autotest_common.sh@936 -- # '[' -z 3987669 ']' 00:19:55.893 22:11:38 -- common/autotest_common.sh@940 -- # kill -0 3987669 00:19:55.893 22:11:38 -- common/autotest_common.sh@941 -- # uname 00:19:55.893 22:11:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.893 22:11:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3987669 00:19:55.893 22:11:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:55.893 22:11:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:55.893 22:11:38 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 3987669' 00:19:55.893 killing process with pid 3987669 00:19:55.893 22:11:38 -- common/autotest_common.sh@955 -- # kill 3987669 00:19:55.893 [2024-04-24 22:11:38.134234] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:55.893 [2024-04-24 22:11:38.134276] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:55.893 22:11:38 -- common/autotest_common.sh@960 -- # wait 3987669 00:19:56.461 22:11:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:56.461 22:11:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:56.461 22:11:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:56.461 22:11:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.461 22:11:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.461 22:11:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.461 22:11:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.461 22:11:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.363 22:11:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:58.363 00:19:58.363 real 0m5.785s 00:19:58.363 user 0m4.444s 00:19:58.363 sys 0m2.195s 00:19:58.363 22:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.363 22:11:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.363 ************************************ 00:19:58.363 END TEST nvmf_aer 00:19:58.363 ************************************ 00:19:58.363 22:11:40 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.363 22:11:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:58.363 
22:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.363 22:11:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.363 ************************************ 00:19:58.363 START TEST nvmf_async_init 00:19:58.363 ************************************ 00:19:58.363 22:11:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.621 * Looking for test storage... 00:19:58.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:58.621 22:11:40 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.621 22:11:40 -- nvmf/common.sh@7 -- # uname -s 00:19:58.621 22:11:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.621 22:11:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.621 22:11:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.621 22:11:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.621 22:11:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.621 22:11:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.621 22:11:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.621 22:11:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.621 22:11:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.621 22:11:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.621 22:11:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:58.621 22:11:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:58.622 22:11:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.622 22:11:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.622 22:11:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.622 22:11:40 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.622 22:11:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.622 22:11:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.622 22:11:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.622 22:11:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.622 22:11:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.622 22:11:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.622 22:11:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.622 22:11:40 -- paths/export.sh@5 -- # export PATH 00:19:58.622 22:11:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.622 22:11:40 -- nvmf/common.sh@47 -- # : 0 00:19:58.622 22:11:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.622 22:11:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.622 22:11:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.622 22:11:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.622 22:11:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.622 22:11:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.622 22:11:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.622 22:11:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.622 22:11:40 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:58.622 22:11:40 -- host/async_init.sh@14 -- # null_block_size=512 00:19:58.622 22:11:40 -- host/async_init.sh@15 -- 
# null_bdev=null0 00:19:58.622 22:11:40 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:58.622 22:11:40 -- host/async_init.sh@20 -- # uuidgen 00:19:58.622 22:11:40 -- host/async_init.sh@20 -- # tr -d - 00:19:58.622 22:11:40 -- host/async_init.sh@20 -- # nguid=88b3ecac32be42f092488bceb0598200 00:19:58.622 22:11:40 -- host/async_init.sh@22 -- # nvmftestinit 00:19:58.622 22:11:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:58.622 22:11:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.622 22:11:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:58.622 22:11:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:58.622 22:11:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:58.622 22:11:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.622 22:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.622 22:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.622 22:11:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:58.622 22:11:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:58.622 22:11:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.622 22:11:40 -- common/autotest_common.sh@10 -- # set +x 00:20:01.151 22:11:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:01.151 22:11:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.151 22:11:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.151 22:11:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.151 22:11:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.151 22:11:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.151 22:11:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.151 22:11:43 -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.151 22:11:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.151 22:11:43 -- nvmf/common.sh@296 -- # e810=() 00:20:01.151 22:11:43 -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.151 
22:11:43 -- nvmf/common.sh@297 -- # x722=() 00:20:01.151 22:11:43 -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.151 22:11:43 -- nvmf/common.sh@298 -- # mlx=() 00:20:01.151 22:11:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.151 22:11:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.151 22:11:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.151 22:11:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.151 22:11:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.151 22:11:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.151 22:11:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.151 22:11:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.151 22:11:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.151 22:11:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:01.151 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:01.151 22:11:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.152 22:11:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:01.152 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:01.152 22:11:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.152 22:11:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.152 22:11:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.152 22:11:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:01.152 22:11:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.152 22:11:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:01.152 Found net devices under 0000:84:00.0: cvl_0_0 00:20:01.152 22:11:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.152 22:11:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.152 22:11:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.152 22:11:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:01.152 22:11:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.152 22:11:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:84:00.1: cvl_0_1' 00:20:01.152 Found net devices under 0000:84:00.1: cvl_0_1 00:20:01.152 22:11:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.152 22:11:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:01.152 22:11:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:01.152 22:11:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:01.152 22:11:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.152 22:11:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.152 22:11:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.152 22:11:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.152 22:11:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.152 22:11:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.152 22:11:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.152 22:11:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.152 22:11:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.152 22:11:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.152 22:11:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.152 22:11:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.152 22:11:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.152 22:11:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.152 22:11:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.152 22:11:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.152 22:11:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.152 22:11:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:20:01.152 22:11:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.152 22:11:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:20:01.152 00:20:01.152 --- 10.0.0.2 ping statistics --- 00:20:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.152 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:20:01.152 22:11:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:01.152 00:20:01.152 --- 10.0.0.1 ping statistics --- 00:20:01.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.152 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:01.152 22:11:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.152 22:11:43 -- nvmf/common.sh@411 -- # return 0 00:20:01.152 22:11:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:01.152 22:11:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.152 22:11:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:01.152 22:11:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.152 22:11:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:01.152 22:11:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:01.152 22:11:43 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:01.152 22:11:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:01.152 22:11:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:01.152 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.152 22:11:43 -- nvmf/common.sh@470 -- # nvmfpid=3989779 00:20:01.152 22:11:43 -- nvmf/common.sh@469 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:01.152 22:11:43 -- nvmf/common.sh@471 -- # waitforlisten 3989779 00:20:01.152 22:11:43 -- common/autotest_common.sh@817 -- # '[' -z 3989779 ']' 00:20:01.152 22:11:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.152 22:11:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.152 22:11:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.152 22:11:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.152 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.152 [2024-04-24 22:11:43.356152] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:01.152 [2024-04-24 22:11:43.356244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.152 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.411 [2024-04-24 22:11:43.433574] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.411 [2024-04-24 22:11:43.552835] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.411 [2024-04-24 22:11:43.552904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.411 [2024-04-24 22:11:43.552920] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.411 [2024-04-24 22:11:43.552934] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:01.411 [2024-04-24 22:11:43.552945] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.411 [2024-04-24 22:11:43.552978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.669 22:11:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.669 22:11:43 -- common/autotest_common.sh@850 -- # return 0 00:20:01.669 22:11:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:01.669 22:11:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 22:11:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.669 22:11:43 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 [2024-04-24 22:11:43.692939] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 null0 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 88b3ecac32be42f092488bceb0598200 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.669 [2024-04-24 22:11:43.732969] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:01.669 [2024-04-24 22:11:43.733258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.669 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.669 22:11:43 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:01.669 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.669 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.927 nvme0n1 00:20:01.927 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.927 22:11:43 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.927 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.927 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.927 [ 00:20:01.927 { 00:20:01.927 "name": "nvme0n1", 00:20:01.927 "aliases": [ 00:20:01.927 "88b3ecac-32be-42f0-9248-8bceb0598200" 00:20:01.927 ], 00:20:01.927 
"product_name": "NVMe disk", 00:20:01.927 "block_size": 512, 00:20:01.927 "num_blocks": 2097152, 00:20:01.927 "uuid": "88b3ecac-32be-42f0-9248-8bceb0598200", 00:20:01.927 "assigned_rate_limits": { 00:20:01.927 "rw_ios_per_sec": 0, 00:20:01.927 "rw_mbytes_per_sec": 0, 00:20:01.927 "r_mbytes_per_sec": 0, 00:20:01.927 "w_mbytes_per_sec": 0 00:20:01.927 }, 00:20:01.927 "claimed": false, 00:20:01.927 "zoned": false, 00:20:01.927 "supported_io_types": { 00:20:01.927 "read": true, 00:20:01.927 "write": true, 00:20:01.927 "unmap": false, 00:20:01.927 "write_zeroes": true, 00:20:01.927 "flush": true, 00:20:01.927 "reset": true, 00:20:01.927 "compare": true, 00:20:01.927 "compare_and_write": true, 00:20:01.927 "abort": true, 00:20:01.927 "nvme_admin": true, 00:20:01.927 "nvme_io": true 00:20:01.927 }, 00:20:01.927 "memory_domains": [ 00:20:01.927 { 00:20:01.927 "dma_device_id": "system", 00:20:01.927 "dma_device_type": 1 00:20:01.927 } 00:20:01.927 ], 00:20:01.927 "driver_specific": { 00:20:01.927 "nvme": [ 00:20:01.927 { 00:20:01.927 "trid": { 00:20:01.927 "trtype": "TCP", 00:20:01.927 "adrfam": "IPv4", 00:20:01.927 "traddr": "10.0.0.2", 00:20:01.927 "trsvcid": "4420", 00:20:01.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.927 }, 00:20:01.927 "ctrlr_data": { 00:20:01.927 "cntlid": 1, 00:20:01.927 "vendor_id": "0x8086", 00:20:01.927 "model_number": "SPDK bdev Controller", 00:20:01.927 "serial_number": "00000000000000000000", 00:20:01.927 "firmware_revision": "24.05", 00:20:01.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.927 "oacs": { 00:20:01.927 "security": 0, 00:20:01.927 "format": 0, 00:20:01.927 "firmware": 0, 00:20:01.927 "ns_manage": 0 00:20:01.927 }, 00:20:01.927 "multi_ctrlr": true, 00:20:01.927 "ana_reporting": false 00:20:01.927 }, 00:20:01.927 "vs": { 00:20:01.927 "nvme_version": "1.3" 00:20:01.927 }, 00:20:01.927 "ns_data": { 00:20:01.927 "id": 1, 00:20:01.927 "can_share": true 00:20:01.927 } 00:20:01.927 } 00:20:01.927 ], 00:20:01.927 "mp_policy": 
"active_passive" 00:20:01.927 } 00:20:01.927 } 00:20:01.927 ] 00:20:01.927 22:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.927 22:11:43 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:01.927 22:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.927 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.927 [2024-04-24 22:11:43.985823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:01.927 [2024-04-24 22:11:43.985912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c5310 (9): Bad file descriptor 00:20:01.927 [2024-04-24 22:11:44.128563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:01.927 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.927 22:11:44 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.927 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.927 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:01.927 [ 00:20:01.927 { 00:20:01.927 "name": "nvme0n1", 00:20:01.927 "aliases": [ 00:20:01.927 "88b3ecac-32be-42f0-9248-8bceb0598200" 00:20:01.927 ], 00:20:01.927 "product_name": "NVMe disk", 00:20:01.927 "block_size": 512, 00:20:01.927 "num_blocks": 2097152, 00:20:01.927 "uuid": "88b3ecac-32be-42f0-9248-8bceb0598200", 00:20:01.927 "assigned_rate_limits": { 00:20:01.927 "rw_ios_per_sec": 0, 00:20:01.927 "rw_mbytes_per_sec": 0, 00:20:01.927 "r_mbytes_per_sec": 0, 00:20:01.927 "w_mbytes_per_sec": 0 00:20:01.927 }, 00:20:01.927 "claimed": false, 00:20:01.927 "zoned": false, 00:20:01.927 "supported_io_types": { 00:20:01.927 "read": true, 00:20:01.927 "write": true, 00:20:01.927 "unmap": false, 00:20:01.927 "write_zeroes": true, 00:20:01.927 "flush": true, 00:20:01.927 "reset": true, 00:20:01.927 "compare": true, 00:20:01.927 "compare_and_write": true, 00:20:01.927 "abort": 
true, 00:20:01.927 "nvme_admin": true, 00:20:01.927 "nvme_io": true 00:20:01.927 }, 00:20:01.927 "memory_domains": [ 00:20:01.927 { 00:20:01.927 "dma_device_id": "system", 00:20:01.927 "dma_device_type": 1 00:20:01.927 } 00:20:01.927 ], 00:20:01.927 "driver_specific": { 00:20:01.927 "nvme": [ 00:20:01.927 { 00:20:01.927 "trid": { 00:20:01.927 "trtype": "TCP", 00:20:01.927 "adrfam": "IPv4", 00:20:01.927 "traddr": "10.0.0.2", 00:20:01.927 "trsvcid": "4420", 00:20:01.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.927 }, 00:20:01.927 "ctrlr_data": { 00:20:01.927 "cntlid": 2, 00:20:01.927 "vendor_id": "0x8086", 00:20:01.927 "model_number": "SPDK bdev Controller", 00:20:01.927 "serial_number": "00000000000000000000", 00:20:01.927 "firmware_revision": "24.05", 00:20:01.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.927 "oacs": { 00:20:01.927 "security": 0, 00:20:01.927 "format": 0, 00:20:01.927 "firmware": 0, 00:20:01.927 "ns_manage": 0 00:20:01.927 }, 00:20:01.927 "multi_ctrlr": true, 00:20:01.927 "ana_reporting": false 00:20:01.927 }, 00:20:01.927 "vs": { 00:20:01.927 "nvme_version": "1.3" 00:20:01.927 }, 00:20:01.927 "ns_data": { 00:20:01.927 "id": 1, 00:20:01.927 "can_share": true 00:20:01.927 } 00:20:01.927 } 00:20:01.927 ], 00:20:01.927 "mp_policy": "active_passive" 00:20:01.927 } 00:20:01.927 } 00:20:01.927 ] 00:20:01.928 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.928 22:11:44 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.928 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.928 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:01.928 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.928 22:11:44 -- host/async_init.sh@53 -- # mktemp 00:20:01.928 22:11:44 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iZbLuX35ez 00:20:01.928 22:11:44 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 
00:20:01.928 22:11:44 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iZbLuX35ez 00:20:01.928 22:11:44 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:01.928 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.928 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:01.928 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.928 22:11:44 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:01.928 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.928 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:01.928 [2024-04-24 22:11:44.178443] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.928 [2024-04-24 22:11:44.178571] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.186 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.186 22:11:44 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZbLuX35ez 00:20:02.186 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.186 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.186 [2024-04-24 22:11:44.186469] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:02.186 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.187 22:11:44 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZbLuX35ez 00:20:02.187 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.187 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.187 [2024-04-24 22:11:44.194481] 
bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.187 [2024-04-24 22:11:44.194541] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:02.187 nvme0n1 00:20:02.187 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.187 22:11:44 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:02.187 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.187 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.187 [ 00:20:02.187 { 00:20:02.187 "name": "nvme0n1", 00:20:02.187 "aliases": [ 00:20:02.187 "88b3ecac-32be-42f0-9248-8bceb0598200" 00:20:02.187 ], 00:20:02.187 "product_name": "NVMe disk", 00:20:02.187 "block_size": 512, 00:20:02.187 "num_blocks": 2097152, 00:20:02.187 "uuid": "88b3ecac-32be-42f0-9248-8bceb0598200", 00:20:02.187 "assigned_rate_limits": { 00:20:02.187 "rw_ios_per_sec": 0, 00:20:02.187 "rw_mbytes_per_sec": 0, 00:20:02.187 "r_mbytes_per_sec": 0, 00:20:02.187 "w_mbytes_per_sec": 0 00:20:02.187 }, 00:20:02.187 "claimed": false, 00:20:02.187 "zoned": false, 00:20:02.187 "supported_io_types": { 00:20:02.187 "read": true, 00:20:02.187 "write": true, 00:20:02.187 "unmap": false, 00:20:02.187 "write_zeroes": true, 00:20:02.187 "flush": true, 00:20:02.187 "reset": true, 00:20:02.187 "compare": true, 00:20:02.187 "compare_and_write": true, 00:20:02.187 "abort": true, 00:20:02.187 "nvme_admin": true, 00:20:02.187 "nvme_io": true 00:20:02.187 }, 00:20:02.187 "memory_domains": [ 00:20:02.187 { 00:20:02.187 "dma_device_id": "system", 00:20:02.187 "dma_device_type": 1 00:20:02.187 } 00:20:02.187 ], 00:20:02.187 "driver_specific": { 00:20:02.187 "nvme": [ 00:20:02.187 { 00:20:02.187 "trid": { 00:20:02.187 "trtype": "TCP", 00:20:02.187 "adrfam": "IPv4", 00:20:02.187 "traddr": "10.0.0.2", 00:20:02.187 "trsvcid": "4421", 00:20:02.187 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:20:02.187 }, 00:20:02.187 "ctrlr_data": { 00:20:02.187 "cntlid": 3, 00:20:02.187 "vendor_id": "0x8086", 00:20:02.187 "model_number": "SPDK bdev Controller", 00:20:02.187 "serial_number": "00000000000000000000", 00:20:02.187 "firmware_revision": "24.05", 00:20:02.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.187 "oacs": { 00:20:02.187 "security": 0, 00:20:02.187 "format": 0, 00:20:02.187 "firmware": 0, 00:20:02.187 "ns_manage": 0 00:20:02.187 }, 00:20:02.187 "multi_ctrlr": true, 00:20:02.187 "ana_reporting": false 00:20:02.187 }, 00:20:02.187 "vs": { 00:20:02.187 "nvme_version": "1.3" 00:20:02.187 }, 00:20:02.187 "ns_data": { 00:20:02.187 "id": 1, 00:20:02.187 "can_share": true 00:20:02.187 } 00:20:02.187 } 00:20:02.187 ], 00:20:02.187 "mp_policy": "active_passive" 00:20:02.187 } 00:20:02.187 } 00:20:02.187 ] 00:20:02.187 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.187 22:11:44 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.187 22:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.187 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.187 22:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.187 22:11:44 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.iZbLuX35ez 00:20:02.187 22:11:44 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:02.187 22:11:44 -- host/async_init.sh@78 -- # nvmftestfini 00:20:02.187 22:11:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.187 22:11:44 -- nvmf/common.sh@117 -- # sync 00:20:02.187 22:11:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.187 22:11:44 -- nvmf/common.sh@120 -- # set +e 00:20:02.187 22:11:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.187 22:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.187 rmmod nvme_tcp 00:20:02.187 rmmod nvme_fabrics 00:20:02.187 rmmod nvme_keyring 00:20:02.187 22:11:44 -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:20:02.187 22:11:44 -- nvmf/common.sh@124 -- # set -e 00:20:02.187 22:11:44 -- nvmf/common.sh@125 -- # return 0 00:20:02.187 22:11:44 -- nvmf/common.sh@478 -- # '[' -n 3989779 ']' 00:20:02.187 22:11:44 -- nvmf/common.sh@479 -- # killprocess 3989779 00:20:02.187 22:11:44 -- common/autotest_common.sh@936 -- # '[' -z 3989779 ']' 00:20:02.187 22:11:44 -- common/autotest_common.sh@940 -- # kill -0 3989779 00:20:02.187 22:11:44 -- common/autotest_common.sh@941 -- # uname 00:20:02.187 22:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.187 22:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3989779 00:20:02.187 22:11:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.187 22:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.187 22:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3989779' 00:20:02.187 killing process with pid 3989779 00:20:02.187 22:11:44 -- common/autotest_common.sh@955 -- # kill 3989779 00:20:02.187 [2024-04-24 22:11:44.375397] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:02.187 [2024-04-24 22:11:44.375438] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:02.187 [2024-04-24 22:11:44.375455] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:02.187 22:11:44 -- common/autotest_common.sh@960 -- # wait 3989779 00:20:02.446 22:11:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:02.446 22:11:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:02.446 22:11:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:02.446 22:11:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.446 22:11:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.446 22:11:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.446 22:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.446 22:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.979 22:11:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.979 00:20:04.979 real 0m6.073s 00:20:04.979 user 0m2.218s 00:20:04.979 sys 0m2.265s 00:20:04.979 22:11:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:04.979 22:11:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.979 ************************************ 00:20:04.979 END TEST nvmf_async_init 00:20:04.979 ************************************ 00:20:04.979 22:11:46 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:04.979 22:11:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:04.979 22:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:04.979 22:11:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.979 ************************************ 00:20:04.979 START TEST dma 00:20:04.979 ************************************ 00:20:04.979 22:11:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:04.979 * Looking for test storage... 
00:20:04.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.979 22:11:46 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.979 22:11:46 -- nvmf/common.sh@7 -- # uname -s 00:20:04.979 22:11:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.979 22:11:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.979 22:11:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.979 22:11:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.979 22:11:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.979 22:11:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.979 22:11:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.979 22:11:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.979 22:11:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.979 22:11:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.979 22:11:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.979 22:11:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.979 22:11:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.979 22:11:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.979 22:11:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.979 22:11:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.979 22:11:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.979 22:11:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.979 22:11:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.979 22:11:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.979 22:11:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.979 22:11:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.979 22:11:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.979 22:11:46 -- paths/export.sh@5 -- # export PATH 00:20:04.979 22:11:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.979 22:11:46 -- nvmf/common.sh@47 -- # : 0 00:20:04.979 22:11:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.979 22:11:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.979 22:11:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.979 22:11:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.979 22:11:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.979 22:11:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.979 22:11:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.979 22:11:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.979 22:11:46 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:04.979 22:11:46 -- host/dma.sh@13 -- # exit 0 00:20:04.979 00:20:04.979 real 0m0.070s 00:20:04.979 user 0m0.036s 00:20:04.979 sys 0m0.040s 00:20:04.979 22:11:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:04.979 22:11:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.979 ************************************ 00:20:04.979 END TEST dma 00:20:04.979 ************************************ 00:20:04.979 22:11:46 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:04.979 22:11:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:04.979 22:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:04.979 22:11:46 -- common/autotest_common.sh@10 
-- # set +x 00:20:04.979 ************************************ 00:20:04.979 START TEST nvmf_identify 00:20:04.979 ************************************ 00:20:04.980 22:11:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:04.980 * Looking for test storage... 00:20:04.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:04.980 22:11:47 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.980 22:11:47 -- nvmf/common.sh@7 -- # uname -s 00:20:04.980 22:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.980 22:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.980 22:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.980 22:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.980 22:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.980 22:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.980 22:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.980 22:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.980 22:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.980 22:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.980 22:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.980 22:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.980 22:11:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.980 22:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.980 22:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.980 22:11:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.980 22:11:47 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.980 22:11:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.980 22:11:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.980 22:11:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.980 22:11:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.980 22:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.980 22:11:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.980 22:11:47 -- paths/export.sh@5 -- # export PATH 00:20:04.980 22:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.980 22:11:47 -- nvmf/common.sh@47 -- # : 0 00:20:04.980 22:11:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.980 22:11:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.980 22:11:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.980 22:11:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.980 22:11:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.980 22:11:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.980 22:11:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.980 22:11:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.980 22:11:47 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.980 22:11:47 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.980 22:11:47 -- host/identify.sh@14 -- # 
nvmftestinit 00:20:04.980 22:11:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:04.980 22:11:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.980 22:11:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:04.980 22:11:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:04.980 22:11:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:04.980 22:11:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.980 22:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.980 22:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.980 22:11:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:04.980 22:11:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:04.980 22:11:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.980 22:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:07.513 22:11:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:07.513 22:11:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.513 22:11:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.513 22:11:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.513 22:11:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.513 22:11:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.513 22:11:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.513 22:11:49 -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.513 22:11:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.513 22:11:49 -- nvmf/common.sh@296 -- # e810=() 00:20:07.513 22:11:49 -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.513 22:11:49 -- nvmf/common.sh@297 -- # x722=() 00:20:07.513 22:11:49 -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.513 22:11:49 -- nvmf/common.sh@298 -- # mlx=() 00:20:07.513 22:11:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.513 22:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.513 22:11:49 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.513 22:11:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.513 22:11:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.513 22:11:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.513 22:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:07.513 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:07.513 22:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.513 22:11:49 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:07.513 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:07.513 22:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.513 22:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.513 22:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.513 22:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:07.513 Found net devices under 0000:84:00.0: cvl_0_0 00:20:07.513 22:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.513 22:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.513 22:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.513 22:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.513 22:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:07.513 Found net devices under 0000:84:00.1: cvl_0_1 00:20:07.513 22:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.513 22:11:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:07.513 22:11:49 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:20:07.513 22:11:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:07.513 22:11:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:07.513 22:11:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.513 22:11:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.513 22:11:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.513 22:11:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.513 22:11:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.513 22:11:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.513 22:11:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.513 22:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.513 22:11:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.513 22:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.513 22:11:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.513 22:11:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.513 22:11:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.513 22:11:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.513 22:11:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.513 22:11:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.513 22:11:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.513 22:11:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.513 22:11:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.513 22:11:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:07.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:20:07.513 00:20:07.513 --- 10.0.0.2 ping statistics --- 00:20:07.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.513 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:20:07.513 22:11:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:20:07.513 00:20:07.513 --- 10.0.0.1 ping statistics --- 00:20:07.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.514 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:20:07.514 22:11:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.514 22:11:49 -- nvmf/common.sh@411 -- # return 0 00:20:07.514 22:11:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:07.514 22:11:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.514 22:11:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:07.514 22:11:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:07.514 22:11:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.514 22:11:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:07.514 22:11:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:07.514 22:11:49 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:07.514 22:11:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:07.514 22:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.514 22:11:49 -- host/identify.sh@19 -- # nvmfpid=3992057 00:20:07.514 22:11:49 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:07.514 22:11:49 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.514 22:11:49 -- host/identify.sh@23 -- # 
waitforlisten 3992057 00:20:07.514 22:11:49 -- common/autotest_common.sh@817 -- # '[' -z 3992057 ']' 00:20:07.514 22:11:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.514 22:11:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.514 22:11:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.514 22:11:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.514 22:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.514 [2024-04-24 22:11:49.636053] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:07.514 [2024-04-24 22:11:49.636148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.514 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.514 [2024-04-24 22:11:49.716225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.772 [2024-04-24 22:11:49.837020] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.772 [2024-04-24 22:11:49.837088] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.772 [2024-04-24 22:11:49.837104] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.772 [2024-04-24 22:11:49.837118] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.772 [2024-04-24 22:11:49.837130] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.772 [2024-04-24 22:11:49.837232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.772 [2024-04-24 22:11:49.837307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.772 [2024-04-24 22:11:49.837358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.772 [2024-04-24 22:11:49.837361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.772 22:11:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.772 22:11:49 -- common/autotest_common.sh@850 -- # return 0 00:20:07.772 22:11:49 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.772 22:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.772 22:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.772 [2024-04-24 22:11:49.976354] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.772 22:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.772 22:11:49 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:07.772 22:11:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:07.772 22:11:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.772 22:11:50 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.772 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.772 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 Malloc0 00:20:08.031 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.031 22:11:50 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.031 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.031 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.031 22:11:50 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:08.031 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.031 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.031 22:11:50 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.031 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.031 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 [2024-04-24 22:11:50.053330] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:08.031 [2024-04-24 22:11:50.053700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.032 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.032 22:11:50 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:08.032 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.032 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.032 22:11:50 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:08.032 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.032 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 [2024-04-24 22:11:50.069348] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:08.032 [ 00:20:08.032 { 00:20:08.032 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:08.032 "subtype": "Discovery", 00:20:08.032 "listen_addresses": [ 00:20:08.032 { 00:20:08.032 "transport": "TCP", 00:20:08.032 "trtype": 
"TCP", 00:20:08.032 "adrfam": "IPv4", 00:20:08.032 "traddr": "10.0.0.2", 00:20:08.032 "trsvcid": "4420" 00:20:08.032 } 00:20:08.032 ], 00:20:08.032 "allow_any_host": true, 00:20:08.032 "hosts": [] 00:20:08.032 }, 00:20:08.032 { 00:20:08.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.032 "subtype": "NVMe", 00:20:08.032 "listen_addresses": [ 00:20:08.032 { 00:20:08.032 "transport": "TCP", 00:20:08.032 "trtype": "TCP", 00:20:08.032 "adrfam": "IPv4", 00:20:08.032 "traddr": "10.0.0.2", 00:20:08.032 "trsvcid": "4420" 00:20:08.032 } 00:20:08.032 ], 00:20:08.032 "allow_any_host": true, 00:20:08.032 "hosts": [], 00:20:08.032 "serial_number": "SPDK00000000000001", 00:20:08.032 "model_number": "SPDK bdev Controller", 00:20:08.032 "max_namespaces": 32, 00:20:08.032 "min_cntlid": 1, 00:20:08.032 "max_cntlid": 65519, 00:20:08.032 "namespaces": [ 00:20:08.032 { 00:20:08.032 "nsid": 1, 00:20:08.032 "bdev_name": "Malloc0", 00:20:08.032 "name": "Malloc0", 00:20:08.032 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:08.032 "eui64": "ABCDEF0123456789", 00:20:08.032 "uuid": "134aac49-91ec-4900-8d53-8c8ab534a946" 00:20:08.032 } 00:20:08.032 ] 00:20:08.032 } 00:20:08.032 ] 00:20:08.032 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.032 22:11:50 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:08.032 [2024-04-24 22:11:50.112480] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:20:08.032 [2024-04-24 22:11:50.112582] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992086 ] 00:20:08.032 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.032 [2024-04-24 22:11:50.165102] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:08.032 [2024-04-24 22:11:50.165181] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:08.032 [2024-04-24 22:11:50.165192] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:08.032 [2024-04-24 22:11:50.165212] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:08.032 [2024-04-24 22:11:50.165227] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:08.032 [2024-04-24 22:11:50.168453] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:08.032 [2024-04-24 22:11:50.168524] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6acd00 0 00:20:08.032 [2024-04-24 22:11:50.168776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:08.032 [2024-04-24 22:11:50.168802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:08.032 [2024-04-24 22:11:50.168812] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:08.032 [2024-04-24 22:11:50.168820] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:08.032 [2024-04-24 22:11:50.168889] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.168903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:08.032 [2024-04-24 22:11:50.168923] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.032 [2024-04-24 22:11:50.168947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:08.032 [2024-04-24 22:11:50.168975] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.032 [2024-04-24 22:11:50.176412] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.032 [2024-04-24 22:11:50.176432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.032 [2024-04-24 22:11:50.176440] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176450] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.032 [2024-04-24 22:11:50.176468] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:08.032 [2024-04-24 22:11:50.176481] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:08.032 [2024-04-24 22:11:50.176493] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:08.032 [2024-04-24 22:11:50.176518] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176527] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176535] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.032 [2024-04-24 22:11:50.176547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.032 [2024-04-24 22:11:50.176575] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x70bec0, cid 0, qid 0 00:20:08.032 [2024-04-24 22:11:50.176760] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.032 [2024-04-24 22:11:50.176777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.032 [2024-04-24 22:11:50.176784] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.032 [2024-04-24 22:11:50.176803] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:08.032 [2024-04-24 22:11:50.176818] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:08.032 [2024-04-24 22:11:50.176839] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176848] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.176855] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.032 [2024-04-24 22:11:50.176868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.032 [2024-04-24 22:11:50.176892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.032 [2024-04-24 22:11:50.177053] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.032 [2024-04-24 22:11:50.177069] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.032 [2024-04-24 22:11:50.177077] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177084] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.032 [2024-04-24 22:11:50.177096] 
nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:08.032 [2024-04-24 22:11:50.177113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:08.032 [2024-04-24 22:11:50.177126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177134] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177141] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.032 [2024-04-24 22:11:50.177153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.032 [2024-04-24 22:11:50.177176] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.032 [2024-04-24 22:11:50.177357] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.032 [2024-04-24 22:11:50.177374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.032 [2024-04-24 22:11:50.177381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177388] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.032 [2024-04-24 22:11:50.177408] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:08.032 [2024-04-24 22:11:50.177428] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177438] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.032 
[2024-04-24 22:11:50.177457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.032 [2024-04-24 22:11:50.177481] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.032 [2024-04-24 22:11:50.177679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.032 [2024-04-24 22:11:50.177692] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.032 [2024-04-24 22:11:50.177700] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177707] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.032 [2024-04-24 22:11:50.177717] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:08.032 [2024-04-24 22:11:50.177727] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:08.032 [2024-04-24 22:11:50.177741] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:08.032 [2024-04-24 22:11:50.177857] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:08.032 [2024-04-24 22:11:50.177879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:08.032 [2024-04-24 22:11:50.177895] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177904] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.032 [2024-04-24 22:11:50.177911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.177923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.033 [2024-04-24 22:11:50.177946] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.033 [2024-04-24 22:11:50.178143] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.178157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.178164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178171] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.033 [2024-04-24 22:11:50.178181] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:08.033 [2024-04-24 22:11:50.178199] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178209] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.178227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.033 [2024-04-24 22:11:50.178250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.033 [2024-04-24 22:11:50.178414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.178431] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.178439] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 
22:11:50.178446] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.033 [2024-04-24 22:11:50.178455] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:08.033 [2024-04-24 22:11:50.178465] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.178479] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:08.033 [2024-04-24 22:11:50.178501] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.178521] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.178543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.033 [2024-04-24 22:11:50.178567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.033 [2024-04-24 22:11:50.178812] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.033 [2024-04-24 22:11:50.178826] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.033 [2024-04-24 22:11:50.178833] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178841] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6acd00): datao=0, datal=4096, cccid=0 00:20:08.033 [2024-04-24 22:11:50.178855] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x70bec0) on tqpair(0x6acd00): expected_datao=0, payload_size=4096 00:20:08.033 [2024-04-24 22:11:50.178865] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178884] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.178896] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.224430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.224438] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224446] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.033 [2024-04-24 22:11:50.224462] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:08.033 [2024-04-24 22:11:50.224472] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:08.033 [2024-04-24 22:11:50.224482] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:08.033 [2024-04-24 22:11:50.224491] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:08.033 [2024-04-24 22:11:50.224500] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:08.033 [2024-04-24 22:11:50.224509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.224527] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.224541] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224549] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.224570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.033 [2024-04-24 22:11:50.224596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.033 [2024-04-24 22:11:50.224802] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.224819] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.224826] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224834] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70bec0) on tqpair=0x6acd00 00:20:08.033 [2024-04-24 22:11:50.224848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224857] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.224875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.033 [2024-04-24 22:11:50.224886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 
22:11:50.224901] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.224911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.033 [2024-04-24 22:11:50.224922] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224929] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.224952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.033 [2024-04-24 22:11:50.224963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224971] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.224978] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.224987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.033 [2024-04-24 22:11:50.224997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.225029] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:08.033 [2024-04-24 22:11:50.225043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 
22:11:50.225063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.033 [2024-04-24 22:11:50.225088] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70bec0, cid 0, qid 0 00:20:08.033 [2024-04-24 22:11:50.225101] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c020, cid 1, qid 0 00:20:08.033 [2024-04-24 22:11:50.225109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c180, cid 2, qid 0 00:20:08.033 [2024-04-24 22:11:50.225118] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.033 [2024-04-24 22:11:50.225126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c440, cid 4, qid 0 00:20:08.033 [2024-04-24 22:11:50.225385] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.225407] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.225416] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225425] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c440) on tqpair=0x6acd00 00:20:08.033 [2024-04-24 22:11:50.225436] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:08.033 [2024-04-24 22:11:50.225446] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:08.033 [2024-04-24 22:11:50.225466] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225477] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6acd00) 00:20:08.033 [2024-04-24 22:11:50.225489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.033 [2024-04-24 22:11:50.225512] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c440, cid 4, qid 0 00:20:08.033 [2024-04-24 22:11:50.225711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.033 [2024-04-24 22:11:50.225728] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.033 [2024-04-24 22:11:50.225735] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225742] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6acd00): datao=0, datal=4096, cccid=4 00:20:08.033 [2024-04-24 22:11:50.225751] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x70c440) on tqpair(0x6acd00): expected_datao=0, payload_size=4096 00:20:08.033 [2024-04-24 22:11:50.225760] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225776] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225785] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.033 [2024-04-24 22:11:50.225799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.033 [2024-04-24 22:11:50.225809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.033 [2024-04-24 22:11:50.225816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.225824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c440) on tqpair=0x6acd00 00:20:08.034 [2024-04-24 22:11:50.225845] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:08.034 [2024-04-24 22:11:50.225879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.225891] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6acd00) 00:20:08.034 [2024-04-24 22:11:50.225903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.034 [2024-04-24 22:11:50.225916] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.225924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.225931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6acd00) 00:20:08.034 [2024-04-24 22:11:50.225941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.034 [2024-04-24 22:11:50.225971] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c440, cid 4, qid 0 00:20:08.034 [2024-04-24 22:11:50.225985] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c5a0, cid 5, qid 0 00:20:08.034 [2024-04-24 22:11:50.226219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.034 [2024-04-24 22:11:50.226236] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.034 [2024-04-24 22:11:50.226243] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.226250] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6acd00): datao=0, datal=1024, cccid=4 00:20:08.034 [2024-04-24 22:11:50.226259] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x70c440) on tqpair(0x6acd00): expected_datao=0, payload_size=1024 00:20:08.034 [2024-04-24 22:11:50.226267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.226278] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.226286] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.226295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.034 [2024-04-24 22:11:50.226305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.034 [2024-04-24 22:11:50.226312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.226319] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c5a0) on tqpair=0x6acd00 00:20:08.034 [2024-04-24 22:11:50.266544] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.034 [2024-04-24 22:11:50.266570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.034 [2024-04-24 22:11:50.266578] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266586] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c440) on tqpair=0x6acd00 00:20:08.034 [2024-04-24 22:11:50.266607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6acd00) 00:20:08.034 [2024-04-24 22:11:50.266630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.034 [2024-04-24 22:11:50.266663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c440, cid 4, qid 0 00:20:08.034 [2024-04-24 22:11:50.266825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.034 [2024-04-24 22:11:50.266848] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.034 [2024-04-24 22:11:50.266857] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266864] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x6acd00): datao=0, datal=3072, cccid=4 00:20:08.034 [2024-04-24 22:11:50.266873] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x70c440) on tqpair(0x6acd00): expected_datao=0, payload_size=3072 00:20:08.034 [2024-04-24 22:11:50.266882] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266893] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266901] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.034 [2024-04-24 22:11:50.266935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.034 [2024-04-24 22:11:50.266943] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266950] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c440) on tqpair=0x6acd00 00:20:08.034 [2024-04-24 22:11:50.266967] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.266977] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6acd00) 00:20:08.034 [2024-04-24 22:11:50.266989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.034 [2024-04-24 22:11:50.267020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c440, cid 4, qid 0 00:20:08.034 [2024-04-24 22:11:50.267193] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.034 [2024-04-24 22:11:50.267207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.034 [2024-04-24 22:11:50.267214] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.267221] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x6acd00): datao=0, datal=8, cccid=4 00:20:08.034 [2024-04-24 22:11:50.267230] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x70c440) on tqpair(0x6acd00): expected_datao=0, payload_size=8 00:20:08.034 [2024-04-24 22:11:50.267238] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.267249] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.034 [2024-04-24 22:11:50.267257] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.297 [2024-04-24 22:11:50.311411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.297 [2024-04-24 22:11:50.311432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.297 [2024-04-24 22:11:50.311440] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.297 [2024-04-24 22:11:50.311447] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c440) on tqpair=0x6acd00 00:20:08.297 ===================================================== 00:20:08.297 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:08.297 ===================================================== 00:20:08.297 Controller Capabilities/Features 00:20:08.297 ================================ 00:20:08.297 Vendor ID: 0000 00:20:08.297 Subsystem Vendor ID: 0000 00:20:08.297 Serial Number: .................... 00:20:08.297 Model Number: ........................................ 
00:20:08.297 Firmware Version: 24.05
00:20:08.297 Recommended Arb Burst: 0
00:20:08.297 IEEE OUI Identifier: 00 00 00
00:20:08.297 Multi-path I/O
00:20:08.297 May have multiple subsystem ports: No
00:20:08.297 May have multiple controllers: No
00:20:08.297 Associated with SR-IOV VF: No
00:20:08.297 Max Data Transfer Size: 131072
00:20:08.297 Max Number of Namespaces: 0
00:20:08.297 Max Number of I/O Queues: 1024
00:20:08.297 NVMe Specification Version (VS): 1.3
00:20:08.297 NVMe Specification Version (Identify): 1.3
00:20:08.297 Maximum Queue Entries: 128
00:20:08.297 Contiguous Queues Required: Yes
00:20:08.297 Arbitration Mechanisms Supported
00:20:08.297 Weighted Round Robin: Not Supported
00:20:08.297 Vendor Specific: Not Supported
00:20:08.297 Reset Timeout: 15000 ms
00:20:08.297 Doorbell Stride: 4 bytes
00:20:08.297 NVM Subsystem Reset: Not Supported
00:20:08.297 Command Sets Supported
00:20:08.297 NVM Command Set: Supported
00:20:08.297 Boot Partition: Not Supported
00:20:08.297 Memory Page Size Minimum: 4096 bytes
00:20:08.297 Memory Page Size Maximum: 4096 bytes
00:20:08.297 Persistent Memory Region: Not Supported
00:20:08.297 Optional Asynchronous Events Supported
00:20:08.297 Namespace Attribute Notices: Not Supported
00:20:08.297 Firmware Activation Notices: Not Supported
00:20:08.297 ANA Change Notices: Not Supported
00:20:08.297 PLE Aggregate Log Change Notices: Not Supported
00:20:08.297 LBA Status Info Alert Notices: Not Supported
00:20:08.297 EGE Aggregate Log Change Notices: Not Supported
00:20:08.297 Normal NVM Subsystem Shutdown event: Not Supported
00:20:08.297 Zone Descriptor Change Notices: Not Supported
00:20:08.297 Discovery Log Change Notices: Supported
00:20:08.297 Controller Attributes
00:20:08.297 128-bit Host Identifier: Not Supported
00:20:08.297 Non-Operational Permissive Mode: Not Supported
00:20:08.297 NVM Sets: Not Supported
00:20:08.297 Read Recovery Levels: Not Supported
00:20:08.297 Endurance Groups: Not Supported
00:20:08.297 Predictable Latency Mode: Not Supported
00:20:08.297 Traffic Based Keep ALive: Not Supported
00:20:08.297 Namespace Granularity: Not Supported
00:20:08.297 SQ Associations: Not Supported
00:20:08.297 UUID List: Not Supported
00:20:08.297 Multi-Domain Subsystem: Not Supported
00:20:08.297 Fixed Capacity Management: Not Supported
00:20:08.297 Variable Capacity Management: Not Supported
00:20:08.297 Delete Endurance Group: Not Supported
00:20:08.298 Delete NVM Set: Not Supported
00:20:08.298 Extended LBA Formats Supported: Not Supported
00:20:08.298 Flexible Data Placement Supported: Not Supported
00:20:08.298 
00:20:08.298 Controller Memory Buffer Support
00:20:08.298 ================================
00:20:08.298 Supported: No
00:20:08.298 
00:20:08.298 Persistent Memory Region Support
00:20:08.298 ================================
00:20:08.298 Supported: No
00:20:08.298 
00:20:08.298 Admin Command Set Attributes
00:20:08.298 ============================
00:20:08.298 Security Send/Receive: Not Supported
00:20:08.298 Format NVM: Not Supported
00:20:08.298 Firmware Activate/Download: Not Supported
00:20:08.298 Namespace Management: Not Supported
00:20:08.298 Device Self-Test: Not Supported
00:20:08.298 Directives: Not Supported
00:20:08.298 NVMe-MI: Not Supported
00:20:08.298 Virtualization Management: Not Supported
00:20:08.298 Doorbell Buffer Config: Not Supported
00:20:08.298 Get LBA Status Capability: Not Supported
00:20:08.298 Command & Feature Lockdown Capability: Not Supported
00:20:08.298 Abort Command Limit: 1
00:20:08.298 Async Event Request Limit: 4
00:20:08.298 Number of Firmware Slots: N/A
00:20:08.298 Firmware Slot 1 Read-Only: N/A
00:20:08.298 Firmware Activation Without Reset: N/A
00:20:08.298 Multiple Update Detection Support: N/A
00:20:08.298 Firmware Update Granularity: No Information Provided
00:20:08.298 Per-Namespace SMART Log: No
00:20:08.298 Asymmetric Namespace Access Log Page: Not Supported
00:20:08.298 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:08.298 Command Effects Log Page: Not Supported
00:20:08.298 Get Log Page Extended Data: Supported
00:20:08.298 Telemetry Log Pages: Not Supported
00:20:08.298 Persistent Event Log Pages: Not Supported
00:20:08.298 Supported Log Pages Log Page: May Support
00:20:08.298 Commands Supported & Effects Log Page: Not Supported
00:20:08.298 Feature Identifiers & Effects Log Page:May Support
00:20:08.298 NVMe-MI Commands & Effects Log Page: May Support
00:20:08.298 Data Area 4 for Telemetry Log: Not Supported
00:20:08.298 Error Log Page Entries Supported: 128
00:20:08.298 Keep Alive: Not Supported
00:20:08.298 
00:20:08.298 NVM Command Set Attributes
00:20:08.298 ==========================
00:20:08.298 Submission Queue Entry Size
00:20:08.298 Max: 1
00:20:08.298 Min: 1
00:20:08.298 Completion Queue Entry Size
00:20:08.298 Max: 1
00:20:08.298 Min: 1
00:20:08.298 Number of Namespaces: 0
00:20:08.298 Compare Command: Not Supported
00:20:08.298 Write Uncorrectable Command: Not Supported
00:20:08.298 Dataset Management Command: Not Supported
00:20:08.298 Write Zeroes Command: Not Supported
00:20:08.298 Set Features Save Field: Not Supported
00:20:08.298 Reservations: Not Supported
00:20:08.298 Timestamp: Not Supported
00:20:08.298 Copy: Not Supported
00:20:08.298 Volatile Write Cache: Not Present
00:20:08.298 Atomic Write Unit (Normal): 1
00:20:08.298 Atomic Write Unit (PFail): 1
00:20:08.298 Atomic Compare & Write Unit: 1
00:20:08.298 Fused Compare & Write: Supported
00:20:08.298 Scatter-Gather List
00:20:08.298 SGL Command Set: Supported
00:20:08.298 SGL Keyed: Supported
00:20:08.298 SGL Bit Bucket Descriptor: Not Supported
00:20:08.298 SGL Metadata Pointer: Not Supported
00:20:08.298 Oversized SGL: Not Supported
00:20:08.298 SGL Metadata Address: Not Supported
00:20:08.298 SGL Offset: Supported
00:20:08.298 Transport SGL Data Block: Not Supported
00:20:08.298 Replay Protected Memory Block: Not Supported
00:20:08.298 
00:20:08.298 Firmware Slot Information
00:20:08.298 =========================
00:20:08.298 Active slot: 0
00:20:08.298 
00:20:08.298 
00:20:08.298 Error Log
00:20:08.298 =========
00:20:08.298 
00:20:08.298 Active Namespaces
00:20:08.298 =================
00:20:08.298 Discovery Log Page
00:20:08.298 ==================
00:20:08.298 Generation Counter: 2
00:20:08.298 Number of Records: 2
00:20:08.298 Record Format: 0
00:20:08.298 
00:20:08.298 Discovery Log Entry 0
00:20:08.298 ----------------------
00:20:08.298 Transport Type: 3 (TCP)
00:20:08.298 Address Family: 1 (IPv4)
00:20:08.298 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:08.298 Entry Flags:
00:20:08.298 Duplicate Returned Information: 1
00:20:08.298 Explicit Persistent Connection Support for Discovery: 1
00:20:08.298 Transport Requirements:
00:20:08.298 Secure Channel: Not Required
00:20:08.298 Port ID: 0 (0x0000)
00:20:08.298 Controller ID: 65535 (0xffff)
00:20:08.298 Admin Max SQ Size: 128
00:20:08.298 Transport Service Identifier: 4420
00:20:08.298 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:08.298 Transport Address: 10.0.0.2
00:20:08.298 Discovery Log Entry 1
00:20:08.298 ----------------------
00:20:08.298 Transport Type: 3 (TCP)
00:20:08.298 Address Family: 1 (IPv4)
00:20:08.298 Subsystem Type: 2 (NVM Subsystem)
00:20:08.298 Entry Flags:
00:20:08.298 Duplicate Returned Information: 0
00:20:08.298 Explicit Persistent Connection Support for Discovery: 0
00:20:08.298 Transport Requirements:
00:20:08.298 Secure Channel: Not Required
00:20:08.298 Port ID: 0 (0x0000)
00:20:08.298 Controller ID: 65535 (0xffff)
00:20:08.298 Admin Max SQ Size: 128
00:20:08.298 Transport Service Identifier: 4420
00:20:08.298 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:08.298 Transport Address: 10.0.0.2 [2024-04-24 22:11:50.311579] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:08.298 [2024-04-24 22:11:50.311608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.298 [2024-04-24 22:11:50.311622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.298 [2024-04-24 22:11:50.311633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.298 [2024-04-24 22:11:50.311644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.298 [2024-04-24 22:11:50.311659] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.311668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.311675] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.298 [2024-04-24 22:11:50.311691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.298 [2024-04-24 22:11:50.311721] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.298 [2024-04-24 22:11:50.311867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.298 [2024-04-24 22:11:50.311884] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.298 [2024-04-24 22:11:50.311891] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.311899] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.298 [2024-04-24 22:11:50.311912] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.311921] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.298 [2024-04-24 
22:11:50.311928] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.298 [2024-04-24 22:11:50.311940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.298 [2024-04-24 22:11:50.311969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.298 [2024-04-24 22:11:50.312122] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.298 [2024-04-24 22:11:50.312135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.298 [2024-04-24 22:11:50.312143] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312151] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.298 [2024-04-24 22:11:50.312160] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:08.298 [2024-04-24 22:11:50.312171] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:08.298 [2024-04-24 22:11:50.312188] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312199] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312206] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.298 [2024-04-24 22:11:50.312218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.298 [2024-04-24 22:11:50.312241] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.298 [2024-04-24 22:11:50.312401] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.298 [2024-04-24 
22:11:50.312416] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.298 [2024-04-24 22:11:50.312424] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.298 [2024-04-24 22:11:50.312450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312460] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.298 [2024-04-24 22:11:50.312468] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.298 [2024-04-24 22:11:50.312479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.312502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.312626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.312643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.312651] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312658] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.312677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312691] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312700] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.312711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 
22:11:50.312735] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.312886] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.312903] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.312910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312918] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.312936] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312947] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.312954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.312966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.312988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.313116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.313129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.313136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313144] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.313161] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313171] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.313190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.313212] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.313345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.313361] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.313369] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313376] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.313406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313419] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.313438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.313461] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.313585] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.313601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.313609] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.313634] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313656] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.313669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.313692] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.313851] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.313868] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.313875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313883] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.313901] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313911] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.313918] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.313930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.313953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.314078] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.314095] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.314103] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314110] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.314128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314138] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314145] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.314157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.314180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.314305] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.314318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.314326] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.314350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314360] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314368] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.314379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.314410] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 
22:11:50.314534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.314547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.314555] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314562] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.314580] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314590] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.314614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.314638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.314793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.314809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.314816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.314842] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.314860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.314871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.314894] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.315017] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.315034] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.315042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.315049] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.315067] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.315077] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.315085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.315096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.299 [2024-04-24 22:11:50.315119] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.299 [2024-04-24 22:11:50.315249] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.299 [2024-04-24 22:11:50.315262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.299 [2024-04-24 22:11:50.315269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.315277] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.299 [2024-04-24 22:11:50.315294] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.299 [2024-04-24 22:11:50.315304] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:08.299 [2024-04-24 22:11:50.315311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.299 [2024-04-24 22:11:50.315323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.315345] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.300 [2024-04-24 22:11:50.319413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.319431] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.319439] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.319447] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x70c2e0) on tqpair=0x6acd00 00:20:08.300 [2024-04-24 22:11:50.319466] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.319476] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.319484] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6acd00) 00:20:08.300 [2024-04-24 22:11:50.319496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.319525] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x70c2e0, cid 3, qid 0 00:20:08.300 [2024-04-24 22:11:50.319655] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.319672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.319679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.319687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x70c2e0) on tqpair=0x6acd00
00:20:08.300 [2024-04-24 22:11:50.319702] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:20:08.300 
00:20:08.300 22:11:50 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:08.300 [2024-04-24 22:11:50.354229] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:20:08.300 [2024-04-24 22:11:50.354273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992163 ]
00:20:08.300 EAL: No free 2048 kB hugepages reported on node 1
00:20:08.300 [2024-04-24 22:11:50.391430] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:08.300 [2024-04-24 22:11:50.391487] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:08.300 [2024-04-24 22:11:50.391498] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:08.300 [2024-04-24 22:11:50.391514] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:08.300 [2024-04-24 22:11:50.391526] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:08.300 [2024-04-24 22:11:50.391761] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:08.300 [2024-04-24 22:11:50.391808] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1638d00 0
00:20:08.300 [2024-04-24 22:11:50.398408] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 
00:20:08.300 [2024-04-24 22:11:50.398430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:08.300 [2024-04-24 22:11:50.398438] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:08.300 [2024-04-24 22:11:50.398445] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:08.300 [2024-04-24 22:11:50.398488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.398501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.398508] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.398524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:08.300 [2024-04-24 22:11:50.398553] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.300 [2024-04-24 22:11:50.405409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.405440] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.405449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.405456] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.405474] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:08.300 [2024-04-24 22:11:50.405486] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:08.300 [2024-04-24 22:11:50.405501] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:08.300 [2024-04-24 22:11:50.405521] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:08.300 [2024-04-24 22:11:50.405531] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.405538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.405551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.405577] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.300 [2024-04-24 22:11:50.405763] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.405776] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.405784] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.405791] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.405801] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:08.300 [2024-04-24 22:11:50.405816] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:08.300 [2024-04-24 22:11:50.405829] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.405837] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.405844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.405856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.405880] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, 
qid 0 00:20:08.300 [2024-04-24 22:11:50.406038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.406054] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.406062] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406069] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.406080] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:08.300 [2024-04-24 22:11:50.406095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:08.300 [2024-04-24 22:11:50.406108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.406135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.406159] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.300 [2024-04-24 22:11:50.406342] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.406355] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.406363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406370] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.406381] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:08.300 [2024-04-24 22:11:50.406407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406423] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406431] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.406443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.406467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.300 [2024-04-24 22:11:50.406634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.406651] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.406658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.406675] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:08.300 [2024-04-24 22:11:50.406684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:08.300 [2024-04-24 22:11:50.406699] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:08.300 [2024-04-24 22:11:50.406809] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:08.300 [2024-04-24 22:11:50.406817] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:08.300 [2024-04-24 22:11:50.406830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406838] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.406845] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.300 [2024-04-24 22:11:50.406857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.300 [2024-04-24 22:11:50.406880] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.300 [2024-04-24 22:11:50.407043] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.300 [2024-04-24 22:11:50.407060] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.300 [2024-04-24 22:11:50.407067] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.300 [2024-04-24 22:11:50.407075] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.300 [2024-04-24 22:11:50.407085] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:08.301 [2024-04-24 22:11:50.407103] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407121] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.407132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.301 
[2024-04-24 22:11:50.407155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.301 [2024-04-24 22:11:50.407306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.301 [2024-04-24 22:11:50.407323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.301 [2024-04-24 22:11:50.407330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407338] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.301 [2024-04-24 22:11:50.407347] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:08.301 [2024-04-24 22:11:50.407360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.407377] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:08.301 [2024-04-24 22:11:50.407402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.407422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.407445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.301 [2024-04-24 22:11:50.407469] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.301 [2024-04-24 22:11:50.407721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:20:08.301 [2024-04-24 22:11:50.407735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.301 [2024-04-24 22:11:50.407742] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407749] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=4096, cccid=0 00:20:08.301 [2024-04-24 22:11:50.407757] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1697ec0) on tqpair(0x1638d00): expected_datao=0, payload_size=4096 00:20:08.301 [2024-04-24 22:11:50.407766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407777] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407786] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407820] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.301 [2024-04-24 22:11:50.407833] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.301 [2024-04-24 22:11:50.407840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407847] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.301 [2024-04-24 22:11:50.407861] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:08.301 [2024-04-24 22:11:50.407871] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:08.301 [2024-04-24 22:11:50.407879] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:08.301 [2024-04-24 22:11:50.407886] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:08.301 [2024-04-24 22:11:50.407894] 
nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:08.301 [2024-04-24 22:11:50.407903] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.407919] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.407932] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.407947] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.407959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.301 [2024-04-24 22:11:50.407983] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.301 [2024-04-24 22:11:50.408149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.301 [2024-04-24 22:11:50.408165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.301 [2024-04-24 22:11:50.408177] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408185] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1697ec0) on tqpair=0x1638d00 00:20:08.301 [2024-04-24 22:11:50.408198] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408206] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408213] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408224] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.301 [2024-04-24 22:11:50.408235] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408243] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408250] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.301 [2024-04-24 22:11:50.408271] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408285] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.301 [2024-04-24 22:11:50.408306] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408320] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.301 [2024-04-24 22:11:50.408339] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.408360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.408374] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.301 [2024-04-24 22:11:50.408427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1697ec0, cid 0, qid 0 00:20:08.301 [2024-04-24 22:11:50.408439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698020, cid 1, qid 0 00:20:08.301 [2024-04-24 22:11:50.408448] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698180, cid 2, qid 0 00:20:08.301 [2024-04-24 22:11:50.408457] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.301 [2024-04-24 22:11:50.408465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.301 [2024-04-24 22:11:50.408680] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.301 [2024-04-24 22:11:50.408696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.301 [2024-04-24 22:11:50.408704] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408711] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.301 [2024-04-24 22:11:50.408721] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:08.301 [2024-04-24 22:11:50.408735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller 
iocs specific (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.408756] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.408770] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.408782] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408790] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.408797] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.408809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.301 [2024-04-24 22:11:50.408832] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.301 [2024-04-24 22:11:50.408996] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.301 [2024-04-24 22:11:50.409012] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.301 [2024-04-24 22:11:50.409020] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.409027] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.301 [2024-04-24 22:11:50.409088] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.409109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:08.301 [2024-04-24 22:11:50.409126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:08.301 [2024-04-24 22:11:50.409135] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.301 [2024-04-24 22:11:50.409146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.301 [2024-04-24 22:11:50.409170] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.301 [2024-04-24 22:11:50.409343] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.301 [2024-04-24 22:11:50.409361] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.301 [2024-04-24 22:11:50.409368] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.301 [2024-04-24 22:11:50.409375] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=4096, cccid=4 00:20:08.301 [2024-04-24 22:11:50.409383] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698440) on tqpair(0x1638d00): expected_datao=0, payload_size=4096 00:20:08.302 [2024-04-24 22:11:50.409392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413418] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413428] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.413454] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.413461] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413468] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.413488] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:08.302 [2024-04-24 22:11:50.413508] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.413528] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.413548] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.413569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.302 [2024-04-24 22:11:50.413594] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.302 [2024-04-24 22:11:50.413783] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.302 [2024-04-24 22:11:50.413800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.302 [2024-04-24 22:11:50.413807] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413814] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=4096, cccid=4 00:20:08.302 [2024-04-24 22:11:50.413822] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698440) on tqpair(0x1638d00): expected_datao=0, payload_size=4096 00:20:08.302 [2024-04-24 22:11:50.413830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413864] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.413874] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414033] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.414046] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.414053] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414060] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.414085] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414106] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414121] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.414142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.302 [2024-04-24 22:11:50.414165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.302 [2024-04-24 22:11:50.414328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.302 [2024-04-24 22:11:50.414344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.302 [2024-04-24 22:11:50.414351] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414358] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=4096, cccid=4 00:20:08.302 [2024-04-24 22:11:50.414367] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698440) on 
tqpair(0x1638d00): expected_datao=0, payload_size=4096 00:20:08.302 [2024-04-24 22:11:50.414375] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414404] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414415] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414527] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.414544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.414551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414559] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.414576] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414599] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414618] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414631] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414640] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414650] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:08.302 [2024-04-24 22:11:50.414658] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:08.302 [2024-04-24 22:11:50.414668] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:08.302 [2024-04-24 22:11:50.414689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414699] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.414711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.302 [2024-04-24 22:11:50.414723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414731] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414738] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.414748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.302 [2024-04-24 22:11:50.414776] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.302 [2024-04-24 22:11:50.414789] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16985a0, cid 5, qid 0 00:20:08.302 [2024-04-24 22:11:50.414959] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.414976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.414984] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.414991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.415003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.415013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.415021] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.415028] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16985a0) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.415046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.415056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.415067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.302 [2024-04-24 22:11:50.415090] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16985a0, cid 5, qid 0 00:20:08.302 [2024-04-24 22:11:50.415254] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.302 [2024-04-24 22:11:50.415267] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.302 [2024-04-24 22:11:50.415275] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.415282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16985a0) on tqpair=0x1638d00 00:20:08.302 [2024-04-24 22:11:50.415300] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.302 [2024-04-24 22:11:50.415315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1638d00) 00:20:08.302 [2024-04-24 22:11:50.415327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 22:11:50.415349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16985a0, cid 5, qid 0 
00:20:08.303 [2024-04-24 22:11:50.415565] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.415582] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.303 [2024-04-24 22:11:50.415590] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16985a0) on tqpair=0x1638d00 00:20:08.303 [2024-04-24 22:11:50.415617] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415626] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1638d00) 00:20:08.303 [2024-04-24 22:11:50.415638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 22:11:50.415660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16985a0, cid 5, qid 0 00:20:08.303 [2024-04-24 22:11:50.415826] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.415839] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.303 [2024-04-24 22:11:50.415846] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415854] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16985a0) on tqpair=0x1638d00 00:20:08.303 [2024-04-24 22:11:50.415876] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415887] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1638d00) 00:20:08.303 [2024-04-24 22:11:50.415898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 
22:11:50.415912] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415920] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1638d00) 00:20:08.303 [2024-04-24 22:11:50.415931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 22:11:50.415944] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415952] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1638d00) 00:20:08.303 [2024-04-24 22:11:50.415963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 22:11:50.415976] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.415984] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1638d00) 00:20:08.303 [2024-04-24 22:11:50.415995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.303 [2024-04-24 22:11:50.416019] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16985a0, cid 5, qid 0 00:20:08.303 [2024-04-24 22:11:50.416031] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698440, cid 4, qid 0 00:20:08.303 [2024-04-24 22:11:50.416039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698700, cid 6, qid 0 00:20:08.303 [2024-04-24 22:11:50.416048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698860, cid 7, qid 0 00:20:08.303 [2024-04-24 22:11:50.416349] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:20:08.303 [2024-04-24 22:11:50.416371] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.303 [2024-04-24 22:11:50.416380] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416387] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=8192, cccid=5 00:20:08.303 [2024-04-24 22:11:50.416403] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16985a0) on tqpair(0x1638d00): expected_datao=0, payload_size=8192 00:20:08.303 [2024-04-24 22:11:50.416412] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416424] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416432] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.303 [2024-04-24 22:11:50.416452] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.303 [2024-04-24 22:11:50.416459] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416466] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=512, cccid=4 00:20:08.303 [2024-04-24 22:11:50.416474] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698440) on tqpair(0x1638d00): expected_datao=0, payload_size=512 00:20:08.303 [2024-04-24 22:11:50.416482] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416492] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416500] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416509] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.303 [2024-04-24 22:11:50.416519] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.303 [2024-04-24 22:11:50.416526] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416533] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=512, cccid=6 00:20:08.303 [2024-04-24 22:11:50.416542] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698700) on tqpair(0x1638d00): expected_datao=0, payload_size=512 00:20:08.303 [2024-04-24 22:11:50.416550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416560] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416568] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:08.303 [2024-04-24 22:11:50.416587] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:08.303 [2024-04-24 22:11:50.416594] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416601] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1638d00): datao=0, datal=4096, cccid=7 00:20:08.303 [2024-04-24 22:11:50.416610] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1698860) on tqpair(0x1638d00): expected_datao=0, payload_size=4096 00:20:08.303 [2024-04-24 22:11:50.416618] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416628] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416636] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.416660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:20:08.303 [2024-04-24 22:11:50.416667] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16985a0) on tqpair=0x1638d00 00:20:08.303 [2024-04-24 22:11:50.416698] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.416711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.303 [2024-04-24 22:11:50.416718] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698440) on tqpair=0x1638d00 00:20:08.303 [2024-04-24 22:11:50.416745] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.416757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.303 [2024-04-24 22:11:50.416764] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416771] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698700) on tqpair=0x1638d00 00:20:08.303 [2024-04-24 22:11:50.416785] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.303 [2024-04-24 22:11:50.416795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.303 [2024-04-24 22:11:50.416803] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.303 [2024-04-24 22:11:50.416810] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698860) on tqpair=0x1638d00 00:20:08.303 ===================================================== 00:20:08.303 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.303 ===================================================== 00:20:08.303 Controller Capabilities/Features 00:20:08.303 ================================ 00:20:08.303 Vendor ID: 8086 
00:20:08.303 Subsystem Vendor ID: 8086 00:20:08.303 Serial Number: SPDK00000000000001 00:20:08.303 Model Number: SPDK bdev Controller 00:20:08.303 Firmware Version: 24.05 00:20:08.303 Recommended Arb Burst: 6 00:20:08.303 IEEE OUI Identifier: e4 d2 5c 00:20:08.303 Multi-path I/O 00:20:08.303 May have multiple subsystem ports: Yes 00:20:08.303 May have multiple controllers: Yes 00:20:08.303 Associated with SR-IOV VF: No 00:20:08.303 Max Data Transfer Size: 131072 00:20:08.303 Max Number of Namespaces: 32 00:20:08.303 Max Number of I/O Queues: 127 00:20:08.303 NVMe Specification Version (VS): 1.3 00:20:08.303 NVMe Specification Version (Identify): 1.3 00:20:08.303 Maximum Queue Entries: 128 00:20:08.303 Contiguous Queues Required: Yes 00:20:08.303 Arbitration Mechanisms Supported 00:20:08.303 Weighted Round Robin: Not Supported 00:20:08.303 Vendor Specific: Not Supported 00:20:08.303 Reset Timeout: 15000 ms 00:20:08.303 Doorbell Stride: 4 bytes 00:20:08.303 NVM Subsystem Reset: Not Supported 00:20:08.303 Command Sets Supported 00:20:08.303 NVM Command Set: Supported 00:20:08.303 Boot Partition: Not Supported 00:20:08.303 Memory Page Size Minimum: 4096 bytes 00:20:08.303 Memory Page Size Maximum: 4096 bytes 00:20:08.303 Persistent Memory Region: Not Supported 00:20:08.303 Optional Asynchronous Events Supported 00:20:08.303 Namespace Attribute Notices: Supported 00:20:08.303 Firmware Activation Notices: Not Supported 00:20:08.303 ANA Change Notices: Not Supported 00:20:08.303 PLE Aggregate Log Change Notices: Not Supported 00:20:08.303 LBA Status Info Alert Notices: Not Supported 00:20:08.303 EGE Aggregate Log Change Notices: Not Supported 00:20:08.303 Normal NVM Subsystem Shutdown event: Not Supported 00:20:08.303 Zone Descriptor Change Notices: Not Supported 00:20:08.303 Discovery Log Change Notices: Not Supported 00:20:08.303 Controller Attributes 00:20:08.303 128-bit Host Identifier: Supported 00:20:08.303 Non-Operational Permissive Mode: Not Supported 00:20:08.303 
NVM Sets: Not Supported 00:20:08.303 Read Recovery Levels: Not Supported 00:20:08.303 Endurance Groups: Not Supported 00:20:08.303 Predictable Latency Mode: Not Supported 00:20:08.303 Traffic Based Keep ALive: Not Supported 00:20:08.304 Namespace Granularity: Not Supported 00:20:08.304 SQ Associations: Not Supported 00:20:08.304 UUID List: Not Supported 00:20:08.304 Multi-Domain Subsystem: Not Supported 00:20:08.304 Fixed Capacity Management: Not Supported 00:20:08.304 Variable Capacity Management: Not Supported 00:20:08.304 Delete Endurance Group: Not Supported 00:20:08.304 Delete NVM Set: Not Supported 00:20:08.304 Extended LBA Formats Supported: Not Supported 00:20:08.304 Flexible Data Placement Supported: Not Supported 00:20:08.304 00:20:08.304 Controller Memory Buffer Support 00:20:08.304 ================================ 00:20:08.304 Supported: No 00:20:08.304 00:20:08.304 Persistent Memory Region Support 00:20:08.304 ================================ 00:20:08.304 Supported: No 00:20:08.304 00:20:08.304 Admin Command Set Attributes 00:20:08.304 ============================ 00:20:08.304 Security Send/Receive: Not Supported 00:20:08.304 Format NVM: Not Supported 00:20:08.304 Firmware Activate/Download: Not Supported 00:20:08.304 Namespace Management: Not Supported 00:20:08.304 Device Self-Test: Not Supported 00:20:08.304 Directives: Not Supported 00:20:08.304 NVMe-MI: Not Supported 00:20:08.304 Virtualization Management: Not Supported 00:20:08.304 Doorbell Buffer Config: Not Supported 00:20:08.304 Get LBA Status Capability: Not Supported 00:20:08.304 Command & Feature Lockdown Capability: Not Supported 00:20:08.304 Abort Command Limit: 4 00:20:08.304 Async Event Request Limit: 4 00:20:08.304 Number of Firmware Slots: N/A 00:20:08.304 Firmware Slot 1 Read-Only: N/A 00:20:08.304 Firmware Activation Without Reset: N/A 00:20:08.304 Multiple Update Detection Support: N/A 00:20:08.304 Firmware Update Granularity: No Information Provided 00:20:08.304 Per-Namespace SMART 
Log: No 00:20:08.304 Asymmetric Namespace Access Log Page: Not Supported 00:20:08.304 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:08.304 Command Effects Log Page: Supported 00:20:08.304 Get Log Page Extended Data: Supported 00:20:08.304 Telemetry Log Pages: Not Supported 00:20:08.304 Persistent Event Log Pages: Not Supported 00:20:08.304 Supported Log Pages Log Page: May Support 00:20:08.304 Commands Supported & Effects Log Page: Not Supported 00:20:08.304 Feature Identifiers & Effects Log Page:May Support 00:20:08.304 NVMe-MI Commands & Effects Log Page: May Support 00:20:08.304 Data Area 4 for Telemetry Log: Not Supported 00:20:08.304 Error Log Page Entries Supported: 128 00:20:08.304 Keep Alive: Supported 00:20:08.304 Keep Alive Granularity: 10000 ms 00:20:08.304 00:20:08.304 NVM Command Set Attributes 00:20:08.304 ========================== 00:20:08.304 Submission Queue Entry Size 00:20:08.304 Max: 64 00:20:08.304 Min: 64 00:20:08.304 Completion Queue Entry Size 00:20:08.304 Max: 16 00:20:08.304 Min: 16 00:20:08.304 Number of Namespaces: 32 00:20:08.304 Compare Command: Supported 00:20:08.304 Write Uncorrectable Command: Not Supported 00:20:08.304 Dataset Management Command: Supported 00:20:08.304 Write Zeroes Command: Supported 00:20:08.304 Set Features Save Field: Not Supported 00:20:08.304 Reservations: Supported 00:20:08.304 Timestamp: Not Supported 00:20:08.304 Copy: Supported 00:20:08.304 Volatile Write Cache: Present 00:20:08.304 Atomic Write Unit (Normal): 1 00:20:08.304 Atomic Write Unit (PFail): 1 00:20:08.304 Atomic Compare & Write Unit: 1 00:20:08.304 Fused Compare & Write: Supported 00:20:08.304 Scatter-Gather List 00:20:08.304 SGL Command Set: Supported 00:20:08.304 SGL Keyed: Supported 00:20:08.304 SGL Bit Bucket Descriptor: Not Supported 00:20:08.304 SGL Metadata Pointer: Not Supported 00:20:08.304 Oversized SGL: Not Supported 00:20:08.304 SGL Metadata Address: Not Supported 00:20:08.304 SGL Offset: Supported 00:20:08.304 Transport SGL Data 
Block: Not Supported 00:20:08.304 Replay Protected Memory Block: Not Supported 00:20:08.304 00:20:08.304 Firmware Slot Information 00:20:08.304 ========================= 00:20:08.304 Active slot: 1 00:20:08.304 Slot 1 Firmware Revision: 24.05 00:20:08.304 00:20:08.304 00:20:08.304 Commands Supported and Effects 00:20:08.304 ============================== 00:20:08.304 Admin Commands 00:20:08.304 -------------- 00:20:08.304 Get Log Page (02h): Supported 00:20:08.304 Identify (06h): Supported 00:20:08.304 Abort (08h): Supported 00:20:08.304 Set Features (09h): Supported 00:20:08.304 Get Features (0Ah): Supported 00:20:08.304 Asynchronous Event Request (0Ch): Supported 00:20:08.304 Keep Alive (18h): Supported 00:20:08.304 I/O Commands 00:20:08.304 ------------ 00:20:08.304 Flush (00h): Supported LBA-Change 00:20:08.304 Write (01h): Supported LBA-Change 00:20:08.304 Read (02h): Supported 00:20:08.304 Compare (05h): Supported 00:20:08.304 Write Zeroes (08h): Supported LBA-Change 00:20:08.304 Dataset Management (09h): Supported LBA-Change 00:20:08.304 Copy (19h): Supported LBA-Change 00:20:08.304 Unknown (79h): Supported LBA-Change 00:20:08.304 Unknown (7Ah): Supported 00:20:08.304 00:20:08.304 Error Log 00:20:08.304 ========= 00:20:08.304 00:20:08.304 Arbitration 00:20:08.304 =========== 00:20:08.304 Arbitration Burst: 1 00:20:08.304 00:20:08.304 Power Management 00:20:08.304 ================ 00:20:08.304 Number of Power States: 1 00:20:08.304 Current Power State: Power State #0 00:20:08.304 Power State #0: 00:20:08.304 Max Power: 0.00 W 00:20:08.304 Non-Operational State: Operational 00:20:08.304 Entry Latency: Not Reported 00:20:08.304 Exit Latency: Not Reported 00:20:08.304 Relative Read Throughput: 0 00:20:08.304 Relative Read Latency: 0 00:20:08.304 Relative Write Throughput: 0 00:20:08.304 Relative Write Latency: 0 00:20:08.304 Idle Power: Not Reported 00:20:08.304 Active Power: Not Reported 00:20:08.304 Non-Operational Permissive Mode: Not Supported 00:20:08.304 
00:20:08.304 Health Information 00:20:08.304 ================== 00:20:08.304 Critical Warnings: 00:20:08.304 Available Spare Space: OK 00:20:08.304 Temperature: OK 00:20:08.304 Device Reliability: OK 00:20:08.304 Read Only: No 00:20:08.304 Volatile Memory Backup: OK 00:20:08.304 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:08.304 Temperature Threshold: [2024-04-24 22:11:50.416946] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.416959] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1638d00) 00:20:08.304 [2024-04-24 22:11:50.416972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.304 [2024-04-24 22:11:50.416996] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1698860, cid 7, qid 0 00:20:08.304 [2024-04-24 22:11:50.417213] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.304 [2024-04-24 22:11:50.417230] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.304 [2024-04-24 22:11:50.417238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.417245] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1698860) on tqpair=0x1638d00 00:20:08.304 [2024-04-24 22:11:50.417292] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:08.304 [2024-04-24 22:11:50.417315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.304 [2024-04-24 22:11:50.417328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.304 [2024-04-24 22:11:50.417338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.304 [2024-04-24 22:11:50.417349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.304 [2024-04-24 22:11:50.417362] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.417370] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.417378] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.304 [2024-04-24 22:11:50.417389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.304 [2024-04-24 22:11:50.421427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.304 [2024-04-24 22:11:50.421582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.304 [2024-04-24 22:11:50.421600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.304 [2024-04-24 22:11:50.421607] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.421615] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.304 [2024-04-24 22:11:50.421628] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.421637] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.421644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.304 [2024-04-24 22:11:50.421655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.304 [2024-04-24 22:11:50.421689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 
00:20:08.304 [2024-04-24 22:11:50.421874] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.304 [2024-04-24 22:11:50.421887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.304 [2024-04-24 22:11:50.421895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.304 [2024-04-24 22:11:50.421902] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.421912] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:08.305 [2024-04-24 22:11:50.421920] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:08.305 [2024-04-24 22:11:50.421937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.421947] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.421954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.421966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.421988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.422149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.422163] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.422170] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422177] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.422196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:08.305 [2024-04-24 22:11:50.422207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.422225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.422247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.422418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.422434] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.422441] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422449] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.422468] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422485] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.422497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.422519] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.422681] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.422698] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.422705] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:08.305 [2024-04-24 22:11:50.422712] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.422732] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422742] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422749] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.422765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.422789] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.422955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.422968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.422975] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.422983] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.423002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423011] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.423030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.423052] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.423267] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.423283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.423291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.423317] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423327] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423334] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.423346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.423369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.423533] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.423547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.423555] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423562] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.423581] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.423610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.423633] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.423786] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.423803] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.423811] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423818] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.423837] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423847] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.423855] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.423866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.423893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.424058] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.424071] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.424078] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424086] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.424105] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424122] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.424134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.424156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.424319] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.424332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.424339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424347] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.424366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.424402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.424427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.424590] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.424603] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.424610] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 
00:20:08.305 [2024-04-24 22:11:50.424636] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424646] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.424665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.424687] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.305 [2024-04-24 22:11:50.424849] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.305 [2024-04-24 22:11:50.424865] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.305 [2024-04-24 22:11:50.424872] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.305 [2024-04-24 22:11:50.424899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424909] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.305 [2024-04-24 22:11:50.424916] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.305 [2024-04-24 22:11:50.424928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.305 [2024-04-24 22:11:50.424955] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.306 [2024-04-24 22:11:50.425119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.306 [2024-04-24 22:11:50.425132] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:08.306 [2024-04-24 22:11:50.425139] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.425146] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.306 [2024-04-24 22:11:50.425165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.425175] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.425182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.306 [2024-04-24 22:11:50.425194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.306 [2024-04-24 22:11:50.425217] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16982e0, cid 3, qid 0 00:20:08.306 [2024-04-24 22:11:50.425382] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.306 [2024-04-24 22:11:50.429406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.306 [2024-04-24 22:11:50.429419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.429427] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.306 [2024-04-24 22:11:50.429448] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.429458] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.429465] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1638d00) 00:20:08.306 [2024-04-24 22:11:50.429477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.306 [2024-04-24 22:11:50.429502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x16982e0, cid 3, qid 0 00:20:08.306 [2024-04-24 22:11:50.429667] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:08.306 [2024-04-24 22:11:50.429680] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:08.306 [2024-04-24 22:11:50.429687] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:08.306 [2024-04-24 22:11:50.429694] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16982e0) on tqpair=0x1638d00 00:20:08.306 [2024-04-24 22:11:50.429710] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:08.306 0 Kelvin (-273 Celsius) 00:20:08.306 Available Spare: 0% 00:20:08.306 Available Spare Threshold: 0% 00:20:08.306 Life Percentage Used: 0% 00:20:08.306 Data Units Read: 0 00:20:08.306 Data Units Written: 0 00:20:08.306 Host Read Commands: 0 00:20:08.306 Host Write Commands: 0 00:20:08.306 Controller Busy Time: 0 minutes 00:20:08.306 Power Cycles: 0 00:20:08.306 Power On Hours: 0 hours 00:20:08.306 Unsafe Shutdowns: 0 00:20:08.306 Unrecoverable Media Errors: 0 00:20:08.306 Lifetime Error Log Entries: 0 00:20:08.306 Warning Temperature Time: 0 minutes 00:20:08.306 Critical Temperature Time: 0 minutes 00:20:08.306 00:20:08.306 Number of Queues 00:20:08.306 ================ 00:20:08.306 Number of I/O Submission Queues: 127 00:20:08.306 Number of I/O Completion Queues: 127 00:20:08.306 00:20:08.306 Active Namespaces 00:20:08.306 ================= 00:20:08.306 Namespace ID:1 00:20:08.306 Error Recovery Timeout: Unlimited 00:20:08.306 Command Set Identifier: NVM (00h) 00:20:08.306 Deallocate: Supported 00:20:08.306 Deallocated/Unwritten Error: Not Supported 00:20:08.306 Deallocated Read Value: Unknown 00:20:08.306 Deallocate in Write Zeroes: Not Supported 00:20:08.306 Deallocated Guard Field: 0xFFFF 00:20:08.306 Flush: Supported 00:20:08.306 Reservation: Supported 00:20:08.306 Namespace Sharing Capabilities: 
Multiple Controllers 00:20:08.306 Size (in LBAs): 131072 (0GiB) 00:20:08.306 Capacity (in LBAs): 131072 (0GiB) 00:20:08.306 Utilization (in LBAs): 131072 (0GiB) 00:20:08.306 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:08.306 EUI64: ABCDEF0123456789 00:20:08.306 UUID: 134aac49-91ec-4900-8d53-8c8ab534a946 00:20:08.306 Thin Provisioning: Not Supported 00:20:08.306 Per-NS Atomic Units: Yes 00:20:08.306 Atomic Boundary Size (Normal): 0 00:20:08.306 Atomic Boundary Size (PFail): 0 00:20:08.306 Atomic Boundary Offset: 0 00:20:08.306 Maximum Single Source Range Length: 65535 00:20:08.306 Maximum Copy Length: 65535 00:20:08.306 Maximum Source Range Count: 1 00:20:08.306 NGUID/EUI64 Never Reused: No 00:20:08.306 Namespace Write Protected: No 00:20:08.306 Number of LBA Formats: 1 00:20:08.306 Current LBA Format: LBA Format #00 00:20:08.306 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:08.306 00:20:08.306 22:11:50 -- host/identify.sh@51 -- # sync 00:20:08.306 22:11:50 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.306 22:11:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.306 22:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.306 22:11:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.306 22:11:50 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:08.306 22:11:50 -- host/identify.sh@56 -- # nvmftestfini 00:20:08.306 22:11:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:08.306 22:11:50 -- nvmf/common.sh@117 -- # sync 00:20:08.306 22:11:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.306 22:11:50 -- nvmf/common.sh@120 -- # set +e 00:20:08.306 22:11:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.306 22:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.306 rmmod nvme_tcp 00:20:08.306 rmmod nvme_fabrics 00:20:08.306 rmmod nvme_keyring 00:20:08.306 22:11:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.306 22:11:50 -- 
nvmf/common.sh@124 -- # set -e 00:20:08.306 22:11:50 -- nvmf/common.sh@125 -- # return 0 00:20:08.306 22:11:50 -- nvmf/common.sh@478 -- # '[' -n 3992057 ']' 00:20:08.306 22:11:50 -- nvmf/common.sh@479 -- # killprocess 3992057 00:20:08.306 22:11:50 -- common/autotest_common.sh@936 -- # '[' -z 3992057 ']' 00:20:08.306 22:11:50 -- common/autotest_common.sh@940 -- # kill -0 3992057 00:20:08.306 22:11:50 -- common/autotest_common.sh@941 -- # uname 00:20:08.306 22:11:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.306 22:11:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3992057 00:20:08.565 22:11:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:08.565 22:11:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:08.565 22:11:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3992057' 00:20:08.565 killing process with pid 3992057 00:20:08.565 22:11:50 -- common/autotest_common.sh@955 -- # kill 3992057 00:20:08.565 [2024-04-24 22:11:50.561943] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:08.565 [2024-04-24 22:11:50.561982] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:08.565 22:11:50 -- common/autotest_common.sh@960 -- # wait 3992057 00:20:08.823 22:11:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:08.823 22:11:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:08.823 22:11:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:08.823 22:11:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.823 22:11:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.823 22:11:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.823 
22:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.823 22:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.724 22:11:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.724 00:20:10.724 real 0m5.905s 00:20:10.724 user 0m4.792s 00:20:10.724 sys 0m2.243s 00:20:10.724 22:11:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:10.724 22:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.724 ************************************ 00:20:10.724 END TEST nvmf_identify 00:20:10.724 ************************************ 00:20:10.724 22:11:52 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:10.724 22:11:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:10.724 22:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.724 22:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.983 ************************************ 00:20:10.983 START TEST nvmf_perf 00:20:10.983 ************************************ 00:20:10.983 22:11:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:10.983 * Looking for test storage... 
00:20:10.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:10.983 22:11:53 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.983 22:11:53 -- nvmf/common.sh@7 -- # uname -s 00:20:10.983 22:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.983 22:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.983 22:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.983 22:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.983 22:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.983 22:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.983 22:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.983 22:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.983 22:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.983 22:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.983 22:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.983 22:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.983 22:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.983 22:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.983 22:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.983 22:11:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.983 22:11:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.983 22:11:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.983 22:11:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.983 22:11:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.983 22:11:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.983 22:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.983 22:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.983 22:11:53 -- paths/export.sh@5 -- # export PATH 00:20:10.983 22:11:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.983 22:11:53 -- nvmf/common.sh@47 -- # : 0 00:20:10.983 22:11:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.983 22:11:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.983 22:11:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.984 22:11:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.984 22:11:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.984 22:11:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.984 22:11:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.984 22:11:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.984 22:11:53 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:10.984 22:11:53 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:10.984 22:11:53 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.984 22:11:53 -- host/perf.sh@17 -- # nvmftestinit 00:20:10.984 22:11:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:10.984 22:11:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.984 22:11:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:10.984 22:11:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:10.984 22:11:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:10.984 22:11:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.984 22:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:10.984 22:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.984 22:11:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:10.984 22:11:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:10.984 22:11:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.984 22:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:13.511 22:11:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.511 22:11:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.511 22:11:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.511 22:11:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.511 22:11:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.511 22:11:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.511 22:11:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.511 22:11:55 -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.511 22:11:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.511 22:11:55 -- nvmf/common.sh@296 -- # e810=() 00:20:13.511 22:11:55 -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.511 22:11:55 -- nvmf/common.sh@297 -- # x722=() 00:20:13.511 22:11:55 -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.511 22:11:55 -- nvmf/common.sh@298 -- # mlx=() 00:20:13.511 22:11:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.511 22:11:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.511 22:11:55 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.511 22:11:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.511 22:11:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.511 22:11:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:13.511 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:13.511 22:11:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.511 22:11:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:13.511 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:13.511 22:11:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.511 
22:11:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.511 22:11:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.511 22:11:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.511 22:11:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.511 22:11:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:13.511 Found net devices under 0000:84:00.0: cvl_0_0 00:20:13.511 22:11:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.511 22:11:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.511 22:11:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:13.511 22:11:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.511 22:11:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:13.511 Found net devices under 0000:84:00.1: cvl_0_1 00:20:13.511 22:11:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:13.511 22:11:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:13.511 22:11:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:13.511 22:11:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.511 22:11:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.511 22:11:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.511 22:11:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.511 22:11:55 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.511 22:11:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.511 22:11:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.511 22:11:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.511 22:11:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.511 22:11:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.511 22:11:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.511 22:11:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.511 22:11:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.511 22:11:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.511 22:11:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.511 22:11:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.511 22:11:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.511 22:11:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.511 22:11:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:20:13.511 00:20:13.511 --- 10.0.0.2 ping statistics --- 00:20:13.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.511 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:13.511 22:11:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:13.511 00:20:13.511 --- 10.0.0.1 ping statistics --- 00:20:13.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.511 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:13.511 22:11:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.511 22:11:55 -- nvmf/common.sh@411 -- # return 0 00:20:13.511 22:11:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:13.511 22:11:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.511 22:11:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:13.511 22:11:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.511 22:11:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:13.511 22:11:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:13.511 22:11:55 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:13.511 22:11:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:13.511 22:11:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:13.511 22:11:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.511 22:11:55 -- nvmf/common.sh@470 -- # nvmfpid=3994165 00:20:13.511 22:11:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.512 22:11:55 -- nvmf/common.sh@471 -- # waitforlisten 3994165 00:20:13.512 22:11:55 -- common/autotest_common.sh@817 -- # '[' -z 3994165 ']' 00:20:13.512 22:11:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.512 22:11:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.512 22:11:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:13.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.512 22:11:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.512 22:11:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.512 [2024-04-24 22:11:55.665948] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:13.512 [2024-04-24 22:11:55.666051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.512 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.512 [2024-04-24 22:11:55.753917] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.769 [2024-04-24 22:11:55.876956] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.769 [2024-04-24 22:11:55.877034] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.769 [2024-04-24 22:11:55.877053] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.769 [2024-04-24 22:11:55.877068] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.769 [2024-04-24 22:11:55.877080] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.769 [2024-04-24 22:11:55.877188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.769 [2024-04-24 22:11:55.877267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.769 [2024-04-24 22:11:55.877322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.769 [2024-04-24 22:11:55.877325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.027 22:11:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.027 22:11:56 -- common/autotest_common.sh@850 -- # return 0 00:20:14.027 22:11:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.027 22:11:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.028 22:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.028 22:11:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.028 22:11:56 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:14.028 22:11:56 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:17.309 22:11:59 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:17.309 22:11:59 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:17.566 22:11:59 -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:20:17.566 22:11:59 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:18.133 22:12:00 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:18.133 22:12:00 -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:20:18.133 22:12:00 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:18.133 22:12:00 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:18.133 22:12:00 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:20:18.133 [2024-04-24 22:12:00.351669] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.133 22:12:00 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.699 22:12:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:18.699 22:12:00 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.957 22:12:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:18.957 22:12:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:19.522 22:12:01 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.780 [2024-04-24 22:12:01.889028] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:19.780 [2024-04-24 22:12:01.889435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.780 22:12:01 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:20.345 22:12:02 -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:20:20.345 22:12:02 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:20.345 22:12:02 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:20.345 22:12:02 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:21.717 
Initializing NVMe Controllers 00:20:21.717 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:20:21.717 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:20:21.717 Initialization complete. Launching workers. 00:20:21.717 ======================================================== 00:20:21.717 Latency(us) 00:20:21.717 Device Information : IOPS MiB/s Average min max 00:20:21.717 PCIE (0000:82:00.0) NSID 1 from core 0: 73539.34 287.26 434.54 51.97 5318.48 00:20:21.717 ======================================================== 00:20:21.717 Total : 73539.34 287.26 434.54 51.97 5318.48 00:20:21.717 00:20:21.717 22:12:03 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:21.717 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.091 Initializing NVMe Controllers 00:20:23.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:23.091 Initialization complete. Launching workers. 
00:20:23.091 ======================================================== 00:20:23.091 Latency(us) 00:20:23.091 Device Information : IOPS MiB/s Average min max 00:20:23.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8865.62 196.79 45880.50 00:20:23.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16330.04 7928.99 47886.82 00:20:23.091 ======================================================== 00:20:23.091 Total : 178.00 0.70 11465.59 196.79 47886.82 00:20:23.091 00:20:23.091 22:12:05 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.091 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.025 Initializing NVMe Controllers 00:20:24.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.025 Initialization complete. Launching workers. 
00:20:24.025 ======================================================== 00:20:24.025 Latency(us) 00:20:24.025 Device Information : IOPS MiB/s Average min max 00:20:24.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7736.97 30.22 4148.42 534.75 7832.31 00:20:24.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.98 15.17 8276.78 6196.49 15944.13 00:20:24.025 ======================================================== 00:20:24.025 Total : 11620.95 45.39 5528.21 534.75 15944.13 00:20:24.025 00:20:24.025 22:12:06 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:24.025 22:12:06 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:24.025 22:12:06 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.284 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.840 Initializing NVMe Controllers 00:20:26.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.840 Controller IO queue size 128, less than required. 00:20:26.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:26.840 Controller IO queue size 128, less than required. 00:20:26.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:26.840 Initialization complete. Launching workers. 
00:20:26.840 ======================================================== 00:20:26.840 Latency(us) 00:20:26.840 Device Information : IOPS MiB/s Average min max 00:20:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1196.81 299.20 109159.05 79259.29 148348.08 00:20:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.41 143.10 231729.00 96285.31 379306.58 00:20:26.840 ======================================================== 00:20:26.840 Total : 1769.22 442.30 148815.05 79259.29 379306.58 00:20:26.840 00:20:26.840 22:12:08 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:26.840 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.097 No valid NVMe controllers or AIO or URING devices found 00:20:27.097 Initializing NVMe Controllers 00:20:27.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.097 Controller IO queue size 128, less than required. 00:20:27.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.097 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:27.097 Controller IO queue size 128, less than required. 00:20:27.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:27.097 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:27.097 WARNING: Some requested NVMe devices were skipped 00:20:27.097 22:12:09 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.624 Initializing NVMe Controllers 00:20:29.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:29.625 Controller IO queue size 128, less than required. 00:20:29.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.625 Controller IO queue size 128, less than required. 00:20:29.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:29.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:29.625 Initialization complete. Launching workers. 
00:20:29.625 00:20:29.625 ==================== 00:20:29.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:29.625 TCP transport: 00:20:29.625 polls: 14113 00:20:29.625 idle_polls: 7881 00:20:29.625 sock_completions: 6232 00:20:29.625 nvme_completions: 4869 00:20:29.625 submitted_requests: 7294 00:20:29.625 queued_requests: 1 00:20:29.625 00:20:29.625 ==================== 00:20:29.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:29.625 TCP transport: 00:20:29.625 polls: 13958 00:20:29.625 idle_polls: 7809 00:20:29.625 sock_completions: 6149 00:20:29.625 nvme_completions: 4487 00:20:29.625 submitted_requests: 6798 00:20:29.625 queued_requests: 1 00:20:29.625 ======================================================== 00:20:29.625 Latency(us) 00:20:29.625 Device Information : IOPS MiB/s Average min max 00:20:29.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1215.11 303.78 108069.79 58170.70 193066.13 00:20:29.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1119.76 279.94 116054.08 55608.34 183424.35 00:20:29.625 ======================================================== 00:20:29.625 Total : 2334.88 583.72 111898.90 55608.34 193066.13 00:20:29.625 00:20:29.625 22:12:11 -- host/perf.sh@66 -- # sync 00:20:29.625 22:12:11 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.196 22:12:12 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:30.196 22:12:12 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:30.196 22:12:12 -- host/perf.sh@114 -- # nvmftestfini 00:20:30.196 22:12:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:30.196 22:12:12 -- nvmf/common.sh@117 -- # sync 00:20:30.196 22:12:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.196 22:12:12 -- nvmf/common.sh@120 -- # set +e 00:20:30.196 22:12:12 -- nvmf/common.sh@121 
-- # for i in {1..20} 00:20:30.196 22:12:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.196 rmmod nvme_tcp 00:20:30.196 rmmod nvme_fabrics 00:20:30.196 rmmod nvme_keyring 00:20:30.196 22:12:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.196 22:12:12 -- nvmf/common.sh@124 -- # set -e 00:20:30.196 22:12:12 -- nvmf/common.sh@125 -- # return 0 00:20:30.196 22:12:12 -- nvmf/common.sh@478 -- # '[' -n 3994165 ']' 00:20:30.196 22:12:12 -- nvmf/common.sh@479 -- # killprocess 3994165 00:20:30.196 22:12:12 -- common/autotest_common.sh@936 -- # '[' -z 3994165 ']' 00:20:30.196 22:12:12 -- common/autotest_common.sh@940 -- # kill -0 3994165 00:20:30.196 22:12:12 -- common/autotest_common.sh@941 -- # uname 00:20:30.196 22:12:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.196 22:12:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3994165 00:20:30.196 22:12:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:30.196 22:12:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:30.196 22:12:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3994165' 00:20:30.196 killing process with pid 3994165 00:20:30.196 22:12:12 -- common/autotest_common.sh@955 -- # kill 3994165 00:20:30.196 [2024-04-24 22:12:12.302940] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:30.196 22:12:12 -- common/autotest_common.sh@960 -- # wait 3994165 00:20:32.093 22:12:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:32.093 22:12:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:32.093 22:12:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:32.093 22:12:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.093 22:12:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.093 22:12:13 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.093 22:12:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.093 22:12:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.990 22:12:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.990 00:20:33.990 real 0m22.948s 00:20:33.990 user 1m12.644s 00:20:33.990 sys 0m5.817s 00:20:33.990 22:12:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:33.990 22:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:33.990 ************************************ 00:20:33.990 END TEST nvmf_perf 00:20:33.990 ************************************ 00:20:33.990 22:12:16 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:33.990 22:12:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:33.990 22:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.990 22:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:33.990 ************************************ 00:20:33.990 START TEST nvmf_fio_host 00:20:33.990 ************************************ 00:20:33.990 22:12:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:33.990 * Looking for test storage... 
00:20:33.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:33.990 22:12:16 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.990 22:12:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.990 22:12:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.990 22:12:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.990 22:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@5 -- # export PATH 00:20:33.990 22:12:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.990 22:12:16 -- nvmf/common.sh@7 -- # uname -s 00:20:33.990 22:12:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.990 22:12:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.990 22:12:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.990 22:12:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.990 22:12:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.990 22:12:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.990 22:12:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.990 22:12:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.990 22:12:16 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.990 22:12:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.990 22:12:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.990 22:12:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.990 22:12:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.990 22:12:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.990 22:12:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.990 22:12:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.990 22:12:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.990 22:12:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.990 22:12:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.990 22:12:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.990 22:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- paths/export.sh@5 -- # export PATH 00:20:33.990 22:12:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.990 22:12:16 -- nvmf/common.sh@47 
-- # : 0 00:20:33.990 22:12:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.990 22:12:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.990 22:12:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.990 22:12:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.990 22:12:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.990 22:12:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.990 22:12:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.990 22:12:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.990 22:12:16 -- host/fio.sh@12 -- # nvmftestinit 00:20:33.990 22:12:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:33.990 22:12:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.990 22:12:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:33.990 22:12:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:33.990 22:12:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:33.990 22:12:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.990 22:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.990 22:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.990 22:12:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:33.990 22:12:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:33.990 22:12:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.990 22:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:36.521 22:12:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:36.521 22:12:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.521 22:12:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.521 22:12:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.521 22:12:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.521 22:12:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.521 22:12:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:20:36.521 22:12:18 -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.521 22:12:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.521 22:12:18 -- nvmf/common.sh@296 -- # e810=() 00:20:36.521 22:12:18 -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.521 22:12:18 -- nvmf/common.sh@297 -- # x722=() 00:20:36.521 22:12:18 -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.521 22:12:18 -- nvmf/common.sh@298 -- # mlx=() 00:20:36.521 22:12:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.521 22:12:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.521 22:12:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.521 22:12:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.521 22:12:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:20:36.521 22:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:36.521 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:36.521 22:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.521 22:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:36.521 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:36.521 22:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.521 22:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.521 22:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.521 22:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:36.521 Found net devices under 0000:84:00.0: cvl_0_0 00:20:36.521 22:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.521 22:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.521 22:12:18 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.521 22:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.521 22:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:36.521 Found net devices under 0000:84:00.1: cvl_0_1 00:20:36.521 22:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.521 22:12:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:36.521 22:12:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:36.521 22:12:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:36.521 22:12:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.521 22:12:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.521 22:12:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.521 22:12:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.521 22:12:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.521 22:12:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.521 22:12:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.521 22:12:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.521 22:12:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.522 22:12:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.522 22:12:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.522 22:12:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.522 22:12:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.522 22:12:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.522 22:12:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:20:36.522 22:12:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.522 22:12:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.522 22:12:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.522 22:12:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.522 22:12:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:20:36.522 00:20:36.522 --- 10.0.0.2 ping statistics --- 00:20:36.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.522 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:36.522 22:12:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:20:36.522 00:20:36.522 --- 10.0.0.1 ping statistics --- 00:20:36.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.522 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:36.522 22:12:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.522 22:12:18 -- nvmf/common.sh@411 -- # return 0 00:20:36.522 22:12:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:36.522 22:12:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.522 22:12:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:36.522 22:12:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:36.522 22:12:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.522 22:12:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:36.522 22:12:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:36.522 22:12:18 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:36.522 22:12:18 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 
00:20:36.522 22:12:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.522 22:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.522 22:12:18 -- host/fio.sh@22 -- # nvmfpid=3998905 00:20:36.522 22:12:18 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.522 22:12:18 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.522 22:12:18 -- host/fio.sh@26 -- # waitforlisten 3998905 00:20:36.522 22:12:18 -- common/autotest_common.sh@817 -- # '[' -z 3998905 ']' 00:20:36.522 22:12:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.522 22:12:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.522 22:12:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.522 22:12:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.522 22:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.780 [2024-04-24 22:12:18.790974] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:36.780 [2024-04-24 22:12:18.791074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.780 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.780 [2024-04-24 22:12:18.873409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.780 [2024-04-24 22:12:18.997487] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:36.780 [2024-04-24 22:12:18.997549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.780 [2024-04-24 22:12:18.997565] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.780 [2024-04-24 22:12:18.997579] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.780 [2024-04-24 22:12:18.997591] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.780 [2024-04-24 22:12:18.997657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.780 [2024-04-24 22:12:18.997718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.780 [2024-04-24 22:12:18.997772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.780 [2024-04-24 22:12:18.997776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.038 22:12:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.038 22:12:19 -- common/autotest_common.sh@850 -- # return 0 00:20:37.038 22:12:19 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 [2024-04-24 22:12:19.141298] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:37.038 22:12:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 22:12:19 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 Malloc1 
00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 [2024-04-24 22:12:19.215518] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:37.038 [2024-04-24 22:12:19.215844] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:37.038 22:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.038 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.038 22:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.038 22:12:19 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:37.038 22:12:19 -- host/fio.sh@39 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:37.038 22:12:19 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:37.038 22:12:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:37.038 22:12:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.038 22:12:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:37.038 22:12:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:37.038 22:12:19 -- common/autotest_common.sh@1327 -- # shift 00:20:37.038 22:12:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:37.038 22:12:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:37.038 22:12:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:37.038 22:12:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:37.038 22:12:19 -- common/autotest_common.sh@1331 
-- # asan_lib= 00:20:37.038 22:12:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:37.038 22:12:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:37.038 22:12:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:37.295 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:37.295 fio-3.35 00:20:37.295 Starting 1 thread 00:20:37.295 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.816 00:20:39.817 test: (groupid=0, jobs=1): err= 0: pid=3999125: Wed Apr 24 22:12:21 2024 00:20:39.817 read: IOPS=8003, BW=31.3MiB/s (32.8MB/s)(62.8MiB/2007msec) 00:20:39.817 slat (usec): min=2, max=141, avg= 3.00, stdev= 1.53 00:20:39.817 clat (usec): min=2590, max=15232, avg=8826.38, stdev=830.33 00:20:39.817 lat (usec): min=2613, max=15235, avg=8829.38, stdev=830.25 00:20:39.817 clat percentiles (usec): 00:20:39.817 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8225], 00:20:39.817 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:20:39.817 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:20:39.817 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13435], 99.95th=[14091], 00:20:39.817 | 99.99th=[14877] 00:20:39.817 bw ( KiB/s): min=29820, max=32832, per=99.83%, avg=31961.00, stdev=1433.63, samples=4 00:20:39.817 iops : min= 7455, max= 8208, avg=7990.25, stdev=358.41, samples=4 00:20:39.817 write: IOPS=7972, BW=31.1MiB/s (32.7MB/s)(62.5MiB/2007msec); 0 zone resets 00:20:39.817 slat (usec): min=2, max=124, avg= 3.18, stdev= 1.19 00:20:39.817 clat (usec): min=1195, max=14159, avg=7149.20, stdev=726.91 00:20:39.817 lat (usec): min=1202, max=14162, avg=7152.38, stdev=726.86 00:20:39.817 clat percentiles 
(usec): 00:20:39.817 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:20:39.817 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:20:39.817 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:20:39.817 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11994], 99.95th=[13042], 00:20:39.817 | 99.99th=[14091] 00:20:39.817 bw ( KiB/s): min=30994, max=32256, per=99.98%, avg=31884.50, stdev=596.15, samples=4 00:20:39.817 iops : min= 7748, max= 8064, avg=7971.00, stdev=149.29, samples=4 00:20:39.817 lat (msec) : 2=0.03%, 4=0.09%, 10=97.10%, 20=2.78% 00:20:39.817 cpu : usr=64.56%, sys=32.35%, ctx=59, majf=0, minf=5 00:20:39.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:39.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:39.817 issued rwts: total=16064,16001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:39.817 00:20:39.817 Run status group 0 (all jobs): 00:20:39.817 READ: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.8MiB (65.8MB), run=2007-2007msec 00:20:39.817 WRITE: bw=31.1MiB/s (32.7MB/s), 31.1MiB/s-31.1MiB/s (32.7MB/s-32.7MB/s), io=62.5MiB (65.5MB), run=2007-2007msec 00:20:39.817 22:12:21 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:39.817 22:12:21 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:39.817 22:12:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:39.817 
22:12:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:39.817 22:12:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:39.817 22:12:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:39.817 22:12:21 -- common/autotest_common.sh@1327 -- # shift 00:20:39.817 22:12:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:39.817 22:12:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:39.817 22:12:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:39.817 22:12:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:39.817 22:12:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:39.817 22:12:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:39.817 22:12:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:39.817 22:12:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:40.074 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, 
ioengine=spdk, iodepth=128 00:20:40.074 fio-3.35 00:20:40.074 Starting 1 thread 00:20:40.074 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.606 00:20:42.606 test: (groupid=0, jobs=1): err= 0: pid=3999454: Wed Apr 24 22:12:24 2024 00:20:42.606 read: IOPS=6807, BW=106MiB/s (112MB/s)(214MiB/2009msec) 00:20:42.606 slat (usec): min=2, max=362, avg= 4.68, stdev= 6.90 00:20:42.606 clat (usec): min=2884, max=52430, avg=11125.79, stdev=5330.77 00:20:42.606 lat (usec): min=2889, max=52434, avg=11130.46, stdev=5331.63 00:20:42.606 clat percentiles (usec): 00:20:42.606 | 1.00th=[ 5276], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 8160], 00:20:42.606 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10683], 00:20:42.606 | 70.00th=[11469], 80.00th=[12518], 90.00th=[14877], 95.00th=[18744], 00:20:42.606 | 99.00th=[34341], 99.50th=[47973], 99.90th=[51643], 99.95th=[52167], 00:20:42.606 | 99.99th=[52167] 00:20:42.606 bw ( KiB/s): min=48832, max=65760, per=50.87%, avg=55408.00, stdev=7490.03, samples=4 00:20:42.606 iops : min= 3052, max= 4110, avg=3463.00, stdev=468.13, samples=4 00:20:42.606 write: IOPS=4033, BW=63.0MiB/s (66.1MB/s)(113MiB/1794msec); 0 zone resets 00:20:42.606 slat (usec): min=30, max=394, avg=43.38, stdev=20.44 00:20:42.606 clat (usec): min=4082, max=40785, avg=13549.77, stdev=4204.81 00:20:42.606 lat (usec): min=4125, max=40831, avg=13593.16, stdev=4214.53 00:20:42.606 clat percentiles (usec): 00:20:42.606 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:20:42.606 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[13304], 00:20:42.606 | 70.00th=[14091], 80.00th=[15139], 90.00th=[17433], 95.00th=[21365], 00:20:42.606 | 99.00th=[32375], 99.50th=[33817], 99.90th=[35914], 99.95th=[36439], 00:20:42.606 | 99.99th=[40633] 00:20:42.606 bw ( KiB/s): min=51200, max=69088, per=89.28%, avg=57616.00, stdev=8147.73, samples=4 00:20:42.606 iops : min= 3200, max= 4318, avg=3601.00, stdev=509.23, samples=4 00:20:42.606 lat 
(msec) : 4=0.08%, 10=34.99%, 20=60.11%, 50=4.64%, 100=0.18% 00:20:42.606 cpu : usr=72.96%, sys=18.92%, ctx=66, majf=0, minf=1 00:20:42.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:42.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:42.606 issued rwts: total=13677,7236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:42.606 00:20:42.606 Run status group 0 (all jobs): 00:20:42.606 READ: bw=106MiB/s (112MB/s), 106MiB/s-106MiB/s (112MB/s-112MB/s), io=214MiB (224MB), run=2009-2009msec 00:20:42.606 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=113MiB (119MB), run=1794-1794msec 00:20:42.606 22:12:24 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.606 22:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.606 22:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.606 22:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.606 22:12:24 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:42.606 22:12:24 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:42.606 22:12:24 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:42.606 22:12:24 -- host/fio.sh@84 -- # nvmftestfini 00:20:42.606 22:12:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:42.606 22:12:24 -- nvmf/common.sh@117 -- # sync 00:20:42.606 22:12:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.606 22:12:24 -- nvmf/common.sh@120 -- # set +e 00:20:42.606 22:12:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.606 22:12:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.606 rmmod nvme_tcp 00:20:42.606 rmmod nvme_fabrics 00:20:42.606 rmmod nvme_keyring 00:20:42.606 22:12:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.606 22:12:24 -- 
nvmf/common.sh@124 -- # set -e 00:20:42.606 22:12:24 -- nvmf/common.sh@125 -- # return 0 00:20:42.606 22:12:24 -- nvmf/common.sh@478 -- # '[' -n 3998905 ']' 00:20:42.606 22:12:24 -- nvmf/common.sh@479 -- # killprocess 3998905 00:20:42.606 22:12:24 -- common/autotest_common.sh@936 -- # '[' -z 3998905 ']' 00:20:42.606 22:12:24 -- common/autotest_common.sh@940 -- # kill -0 3998905 00:20:42.606 22:12:24 -- common/autotest_common.sh@941 -- # uname 00:20:42.606 22:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.606 22:12:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3998905 00:20:42.607 22:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:42.607 22:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:42.607 22:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3998905' 00:20:42.607 killing process with pid 3998905 00:20:42.607 22:12:24 -- common/autotest_common.sh@955 -- # kill 3998905 00:20:42.607 [2024-04-24 22:12:24.552636] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:42.607 22:12:24 -- common/autotest_common.sh@960 -- # wait 3998905 00:20:42.607 22:12:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:42.607 22:12:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:42.607 22:12:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:42.607 22:12:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.607 22:12:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.607 22:12:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.607 22:12:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.607 22:12:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.134 22:12:26 -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:20:45.134 00:20:45.134 real 0m10.739s 00:20:45.134 user 0m27.723s 00:20:45.134 sys 0m3.905s 00:20:45.134 22:12:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:45.134 22:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.134 ************************************ 00:20:45.134 END TEST nvmf_fio_host 00:20:45.134 ************************************ 00:20:45.134 22:12:26 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:45.134 22:12:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:45.134 22:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.134 22:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.134 ************************************ 00:20:45.134 START TEST nvmf_failover 00:20:45.134 ************************************ 00:20:45.134 22:12:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:45.134 * Looking for test storage... 
00:20:45.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:45.134 22:12:27 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.134 22:12:27 -- nvmf/common.sh@7 -- # uname -s 00:20:45.134 22:12:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.134 22:12:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.134 22:12:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.134 22:12:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.134 22:12:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.134 22:12:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.134 22:12:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.134 22:12:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.134 22:12:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.134 22:12:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.134 22:12:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.134 22:12:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.134 22:12:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.134 22:12:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.134 22:12:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.134 22:12:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.134 22:12:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.134 22:12:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.134 22:12:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.134 22:12:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.134 22:12:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.134 22:12:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.134 22:12:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.134 22:12:27 -- paths/export.sh@5 -- # export PATH 00:20:45.134 22:12:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.134 22:12:27 -- nvmf/common.sh@47 -- # : 0 00:20:45.134 22:12:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.134 22:12:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.134 22:12:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.134 22:12:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.134 22:12:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.134 22:12:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.134 22:12:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.134 22:12:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.134 22:12:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.134 22:12:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.134 22:12:27 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.134 22:12:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.134 22:12:27 -- host/failover.sh@18 -- # nvmftestinit 00:20:45.134 22:12:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:45.134 22:12:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.134 22:12:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:45.134 22:12:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:45.134 22:12:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:45.134 22:12:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:45.134 22:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.134 22:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.134 22:12:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:45.134 22:12:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:45.134 22:12:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.134 22:12:27 -- common/autotest_common.sh@10 -- # set +x 00:20:47.659 22:12:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:47.659 22:12:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.659 22:12:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.659 22:12:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.659 22:12:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.659 22:12:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.659 22:12:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.659 22:12:29 -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.659 22:12:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.659 22:12:29 -- nvmf/common.sh@296 -- # e810=() 00:20:47.659 22:12:29 -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.659 22:12:29 -- nvmf/common.sh@297 -- # x722=() 00:20:47.659 22:12:29 -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.659 22:12:29 -- nvmf/common.sh@298 -- # mlx=() 00:20:47.659 22:12:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.659 22:12:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:47.659 22:12:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.659 22:12:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.659 22:12:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.659 22:12:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.659 22:12:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.659 22:12:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:47.659 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:47.659 22:12:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.659 22:12:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:47.659 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:47.659 22:12:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.659 22:12:29 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.659 22:12:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.659 22:12:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.659 22:12:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:47.659 22:12:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.659 22:12:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:47.659 Found net devices under 0000:84:00.0: cvl_0_0 00:20:47.659 22:12:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.659 22:12:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.659 22:12:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.659 22:12:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:47.659 22:12:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.659 22:12:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:47.659 Found net devices under 0000:84:00.1: cvl_0_1 00:20:47.659 22:12:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.659 22:12:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:47.659 22:12:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:47.659 22:12:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:47.659 22:12:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:47.659 22:12:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.659 22:12:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.659 22:12:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.659 22:12:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:20:47.659 22:12:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.659 22:12:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.659 22:12:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.659 22:12:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.659 22:12:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.659 22:12:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.659 22:12:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.659 22:12:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.659 22:12:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.659 22:12:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.659 22:12:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.659 22:12:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.659 22:12:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.660 22:12:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.660 22:12:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.660 22:12:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:20:47.660 00:20:47.660 --- 10.0.0.2 ping statistics --- 00:20:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.660 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:47.660 22:12:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:20:47.660 00:20:47.660 --- 10.0.0.1 ping statistics --- 00:20:47.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.660 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:47.660 22:12:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.660 22:12:29 -- nvmf/common.sh@411 -- # return 0 00:20:47.660 22:12:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:47.660 22:12:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.660 22:12:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:47.660 22:12:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:47.660 22:12:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.660 22:12:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:47.660 22:12:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:47.660 22:12:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:47.660 22:12:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:47.660 22:12:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:47.660 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.660 22:12:29 -- nvmf/common.sh@470 -- # nvmfpid=4001788 00:20:47.660 22:12:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:47.660 22:12:29 -- nvmf/common.sh@471 -- # waitforlisten 4001788 00:20:47.660 22:12:29 -- common/autotest_common.sh@817 -- # '[' -z 4001788 ']' 00:20:47.660 22:12:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.660 22:12:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.660 22:12:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:47.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.660 22:12:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.660 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.660 [2024-04-24 22:12:29.634947] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:47.660 [2024-04-24 22:12:29.635039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.660 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.660 [2024-04-24 22:12:29.714652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:47.660 [2024-04-24 22:12:29.837907] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.660 [2024-04-24 22:12:29.837979] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.660 [2024-04-24 22:12:29.837995] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.660 [2024-04-24 22:12:29.838009] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.660 [2024-04-24 22:12:29.838021] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.660 [2024-04-24 22:12:29.838087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.660 [2024-04-24 22:12:29.838146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.660 [2024-04-24 22:12:29.838142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.917 22:12:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.917 22:12:29 -- common/autotest_common.sh@850 -- # return 0 00:20:47.917 22:12:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:47.917 22:12:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.917 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.917 22:12:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.917 22:12:29 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:48.482 [2024-04-24 22:12:30.475693] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.482 22:12:30 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:48.772 Malloc0 00:20:48.772 22:12:30 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.030 22:12:31 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.596 22:12:31 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.853 [2024-04-24 22:12:31.876562] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of 
trtype to be removed in v24.09 00:20:49.853 [2024-04-24 22:12:31.876899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.853 22:12:31 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:50.140 [2024-04-24 22:12:32.209840] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:50.140 22:12:32 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:50.398 [2024-04-24 22:12:32.543076] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:50.398 22:12:32 -- host/failover.sh@31 -- # bdevperf_pid=4002094 00:20:50.398 22:12:32 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:50.398 22:12:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.398 22:12:32 -- host/failover.sh@34 -- # waitforlisten 4002094 /var/tmp/bdevperf.sock 00:20:50.398 22:12:32 -- common/autotest_common.sh@817 -- # '[' -z 4002094 ']' 00:20:50.398 22:12:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.398 22:12:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.398 22:12:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:50.398 22:12:32 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:50.398 22:12:32 -- common/autotest_common.sh@10 -- # set +x
00:20:50.964 22:12:32 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:50.964 22:12:32 -- common/autotest_common.sh@850 -- # return 0
00:20:50.964 22:12:32 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:51.528 NVMe0n1
00:20:51.528 22:12:33 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:51.786
00:20:51.786 22:12:34 -- host/failover.sh@39 -- # run_test_pid=4002343
00:20:51.786 22:12:34 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:51.786 22:12:34 -- host/failover.sh@41 -- # sleep 1
00:20:53.161 22:12:35 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:53.161 [2024-04-24 22:12:35.355094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa04b30 is same with the state(5) to be set
00:20:53.161 [... the same *ERROR* line for tqpair=0xa04b30 repeats with successive timestamps from 22:12:35.355178 through 22:12:35.356264 ...]
00:20:53.162 22:12:35 -- host/failover.sh@45 -- # sleep 3
00:20:56.444 22:12:38 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:57.010
00:20:57.010 22:12:38 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:57.268 [2024-04-24 22:12:39.377333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa05340 is same with the state(5) to be set
00:20:57.268 [... the same *ERROR* line for tqpair=0xa05340 repeats with successive timestamps from 22:12:39.377427 through 22:12:39.378258 ...]
00:20:57.269 22:12:39 -- host/failover.sh@50 -- # sleep 3
00:21:00.559 22:12:42 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:00.816 [2024-04-24 22:12:42.833810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:00.816 22:12:42 -- host/failover.sh@55 -- # sleep 1
00:21:01.759 22:12:43 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:02.016 [2024-04-24 22:12:44.148578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ab8e0 is same with the state(5) to be set
00:21:02.016 [... the same *ERROR* line for tqpair=0x7ab8e0 repeats with successive timestamps from 22:12:44.148642 through 22:12:44.148858 ...]
00:21:02.017 22:12:44 -- host/failover.sh@59 -- # wait 4002343
00:21:07.279 0
00:21:07.279 22:12:49 -- host/failover.sh@61 -- # killprocess 4002094
00:21:07.279 22:12:49 -- common/autotest_common.sh@936 -- # '[' -z 4002094 ']'
00:21:07.279 22:12:49 -- common/autotest_common.sh@940 -- # kill -0 4002094
00:21:07.279 22:12:49 -- common/autotest_common.sh@941 -- # uname
00:21:07.279 22:12:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:07.279 22:12:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4002094
00:21:07.279 22:12:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:07.279 22:12:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:07.279 22:12:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4002094'
killing process with pid 4002094
00:21:07.279 22:12:49 -- common/autotest_common.sh@955 -- # kill 4002094
00:21:07.279 22:12:49 -- common/autotest_common.sh@960 -- # wait 4002094
00:21:07.547 22:12:49 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:07.547 [2024-04-24 22:12:32.615544] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:21:07.547 [2024-04-24 22:12:32.615647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002094 ]
00:21:07.547 EAL: No free 2048 kB hugepages reported on node 1
00:21:07.547 [2024-04-24 22:12:32.691030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:07.547 [2024-04-24 22:12:32.812993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:07.547 Running I/O for 15 seconds...
00:21:07.547 [2024-04-24 22:12:35.356671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.547 [2024-04-24 22:12:35.356728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.547 [... the READ command / 'ABORTED - SQ DELETION' completion pair repeats for each in-flight command, lba:72368 through lba:72512 (len:8, varying cid on the READ lines) ...]
00:21:07.548 [2024-04-24 22:12:35.357403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.548 [2024-04-24 22:12:35.357422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.548
[2024-04-24 22:12:35.357439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.357955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.357969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 
[2024-04-24 22:12:35.357986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.548 [2024-04-24 22:12:35.358277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.548 [2024-04-24 22:12:35.358293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 
[2024-04-24 22:12:35.358544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.358971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.358985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 
[2024-04-24 22:12:35.359097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.549 [2024-04-24 22:12:35.359341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.549 [2024-04-24 22:12:35.359373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.549 [2024-04-24 22:12:35.359577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.549 [2024-04-24 22:12:35.359594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 
[2024-04-24 22:12:35.359657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.359979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.359996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 
[2024-04-24 22:12:35.360203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 
[2024-04-24 22:12:35.360750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.550 [2024-04-24 22:12:35.360864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.550 [2024-04-24 22:12:35.360880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c430 is same with the state(5) to be set 00:21:07.550 [2024-04-24 22:12:35.360899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.551 [2024-04-24 22:12:35.360911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.551 [2024-04-24 22:12:35.360924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73360 len:8 PRP1 0x0 PRP2 0x0 00:21:07.551 [2024-04-24 
22:12:35.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:35.361012] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100c430 was disconnected and freed. reset controller. 00:21:07.551 [2024-04-24 22:12:35.361033] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:07.551 [2024-04-24 22:12:35.361070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.551 [2024-04-24 22:12:35.361089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:35.361105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.551 [2024-04-24 22:12:35.361120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:35.361152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.551 [2024-04-24 22:12:35.361167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:35.361182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.551 [2024-04-24 22:12:35.361196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:35.361211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:07.551 [2024-04-24 22:12:35.364839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.551 [2024-04-24 22:12:35.364881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016ab0 (9): Bad file descriptor 00:21:07.551 [2024-04-24 22:12:35.554166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:07.551 [2024-04-24 22:12:39.379796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.379885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.379910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.379930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.379946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.379963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.379978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.379995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.380010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.380042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.380074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.551 [2024-04-24 22:12:39.380106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 
22:12:39.380187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108240 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380751] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.551 [2024-04-24 22:12:39.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.551 [2024-04-24 22:12:39.380878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.380892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.380909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.380924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.380940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.380955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.380972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.380986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.552 [2024-04-24 22:12:39.381113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:07.552 [2024-04-24 22:12:39.381669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.552 [2024-04-24 22:12:39.381843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.552 [2024-04-24 22:12:39.381858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 58 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: sequential WRITE commands, sqid:1, lba 108568 through 109024 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion ABORTED - SQ DELETION (00/08) ...]
00:21:07.554 [2024-04-24 22:12:39.383771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.554 [2024-04-24 22:12:39.383789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109032 len:8 PRP1 0x0 PRP2 0x0 00:21:07.554 [2024-04-24 22:12:39.383804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.554 [2024-04-24 22:12:39.384086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... further nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request entries elided: queued commands completed manually and ABORTED - SQ DELETION (00/08) — WRITE lba 109040 through 109088, READ lba 108072 through 108128, then WRITE lba 108136 through 108264 (len:8 each, PRP1 0x0 PRP2 0x0) ...]
00:21:07.555 [2024-04-24 22:12:39.385802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.385816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.555 [2024-04-24 22:12:39.385828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.555 [2024-04-24 22:12:39.385840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108272 len:8 PRP1 0x0 PRP2 0x0 00:21:07.555 [2024-04-24 22:12:39.385853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.385867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.555 [2024-04-24 22:12:39.385879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.555 [2024-04-24 22:12:39.385892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108280 len:8 PRP1 0x0 PRP2 0x0 00:21:07.555 [2024-04-24 22:12:39.385905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.385919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.555 [2024-04-24 22:12:39.385931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.555 [2024-04-24 22:12:39.385950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108288 len:8 PRP1 0x0 PRP2 0x0 00:21:07.555 [2024-04-24 22:12:39.385965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.385980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.555 [2024-04-24 22:12:39.385991] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.555 [2024-04-24 22:12:39.386004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108296 len:8 PRP1 0x0 PRP2 0x0 00:21:07.555 [2024-04-24 22:12:39.386017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.386031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.555 [2024-04-24 22:12:39.386043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.555 [2024-04-24 22:12:39.386055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 PRP1 0x0 PRP2 0x0 00:21:07.555 [2024-04-24 22:12:39.386069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.555 [2024-04-24 22:12:39.386083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108312 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108320 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.556 [2024-04-24 22:12:39.386182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108328 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108336 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108344 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386352] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108352 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108360 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108368 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108376 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108384 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108392 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108400 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108408 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108416 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108424 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386903] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108432 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.386954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.386967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108440 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.386980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.386994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.387011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.387024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108448 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.387038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.387052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.387063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.387075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108456 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.556 [2024-04-24 22:12:39.387088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.387102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.556 [2024-04-24 22:12:39.387114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.556 [2024-04-24 22:12:39.387126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108464 len:8 PRP1 0x0 PRP2 0x0 00:21:07.556 [2024-04-24 22:12:39.387139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.556 [2024-04-24 22:12:39.387153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108472 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108480 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387259] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108488 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108496 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108504 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108512 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108520 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108528 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108536 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108544 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108552 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108560 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387836] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.387848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108568 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.387862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.387877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.387889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108576 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108584 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108592 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.557 [2024-04-24 22:12:39.394508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108600 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108608 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108616 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394682] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108624 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108632 len:8 PRP1 0x0 PRP2 0x0 00:21:07.557 [2024-04-24 22:12:39.394770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.557 [2024-04-24 22:12:39.394784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.557 [2024-04-24 22:12:39.394795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.557 [2024-04-24 22:12:39.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108640 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.394821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.394834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.394846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.394858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108648 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.394871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.394885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.394896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.394908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108656 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.394925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.394940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.394952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.394964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108664 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.394977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.394991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108672 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108680 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108688 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108696 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395205] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108704 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108712 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108720 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108728 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.558 [2024-04-24 22:12:39.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108736 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108744 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108752 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395563] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108760 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108768 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108776 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395744] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108784 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108792 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108800 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108808 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.558 [2024-04-24 22:12:39.395949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108816 len:8 PRP1 0x0 PRP2 0x0 00:21:07.558 [2024-04-24 22:12:39.395963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.558 [2024-04-24 22:12:39.395977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.558 [2024-04-24 22:12:39.395988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108824 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108832 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396089] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108840 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108848 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108856 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108864 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.559 [2024-04-24 22:12:39.396271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108872 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108880 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108888 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396446] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108896 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108904 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108912 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108920 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108928 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108936 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108944 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108952 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108960 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108968 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.396949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.396963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.396975] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.396987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108976 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.397000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.397014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.397025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.397037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108984 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.397051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.397064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.397076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.397088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108992 len:8 PRP1 0x0 PRP2 0x0 00:21:07.559 [2024-04-24 22:12:39.397101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.397115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.559 [2024-04-24 22:12:39.397126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.559 [2024-04-24 22:12:39.397138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109000 len:8 PRP1 0x0 PRP2 0x0 
00:21:07.559 [2024-04-24 22:12:39.397151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.559 [2024-04-24 22:12:39.397165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.560 [2024-04-24 22:12:39.397177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.560 [2024-04-24 22:12:39.397189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109008 len:8 PRP1 0x0 PRP2 0x0 00:21:07.560 [2024-04-24 22:12:39.397202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.560 [2024-04-24 22:12:39.397227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.560 [2024-04-24 22:12:39.397240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109016 len:8 PRP1 0x0 PRP2 0x0 00:21:07.560 [2024-04-24 22:12:39.397253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.560 [2024-04-24 22:12:39.397278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.560 [2024-04-24 22:12:39.397290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109024 len:8 PRP1 0x0 PRP2 0x0 00:21:07.560 [2024-04-24 22:12:39.397304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397322] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.560 [2024-04-24 22:12:39.397334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.560 [2024-04-24 22:12:39.397346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109032 len:8 PRP1 0x0 PRP2 0x0 00:21:07.560 [2024-04-24 22:12:39.397360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397438] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11e00c0 was disconnected and freed. reset controller. 00:21:07.560 [2024-04-24 22:12:39.397462] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:07.560 [2024-04-24 22:12:39.397506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.560 [2024-04-24 22:12:39.397526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.560 [2024-04-24 22:12:39.397557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.560 [2024-04-24 22:12:39.397586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.560 [2024-04-24 22:12:39.397614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:39.397628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.560 [2024-04-24 22:12:39.397684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016ab0 (9): Bad file descriptor 00:21:07.560 [2024-04-24 22:12:39.401249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.560 [2024-04-24 22:12:39.437775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:07.560 [2024-04-24 22:12:44.149081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.560 [2024-04-24 22:12:44.149127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:44.149159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.560 [2024-04-24 22:12:44.149178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:44.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.560 [2024-04-24 22:12:44.149213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.560 [2024-04-24 22:12:44.149240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.560 [2024-04-24 22:12:44.149935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.560 [2024-04-24 22:12:44.149967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.560 [2024-04-24 22:12:44.149984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.149999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.561 [2024-04-24 22:12:44.150483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.150970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.150985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.151022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.151053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.561 [2024-04-24 22:12:44.151070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.561 [2024-04-24 22:12:44.151084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.151978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.151993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.562 [2024-04-24 22:12:44.152302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.562 [2024-04-24 22:12:44.152318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.563 [2024-04-24 22:12:44.152851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:07.563 [2024-04-24 22:12:44.152883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.152979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.152996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563 [2024-04-24 22:12:44.153175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.563 [2024-04-24 22:12:44.153192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.563
[2024-04-24 22:12:44.153207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.563 [2024-04-24 22:12:44.153239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.563 [2024-04-24 22:12:44.153272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.563 [2024-04-24 22:12:44.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.563 [2024-04-24 22:12:44.153336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a860 is same with the state(5) to be set 00:21:07.563 [2024-04-24 22:12:44.153371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:07.563 [2024-04-24 22:12:44.153384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:07.563 
[2024-04-24 22:12:44.153404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21784 len:8 PRP1 0x0 PRP2 0x0 00:21:07.563 [2024-04-24 22:12:44.153419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153498] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x103a860 was disconnected and freed. reset controller. 00:21:07.563 [2024-04-24 22:12:44.153518] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:07.563 [2024-04-24 22:12:44.153555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.563 [2024-04-24 22:12:44.153580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.563 [2024-04-24 22:12:44.153611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.563 [2024-04-24 22:12:44.153640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.563 [2024-04-24 22:12:44.153670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.563 [2024-04-24 22:12:44.153685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.563 [2024-04-24 22:12:44.153726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016ab0 (9): Bad file descriptor 00:21:07.563 [2024-04-24 22:12:44.157326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.563 [2024-04-24 22:12:44.316530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:07.564 00:21:07.564 Latency(us) 00:21:07.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.564 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:07.564 Verification LBA range: start 0x0 length 0x4000 00:21:07.564 NVMe0n1 : 15.02 7666.06 29.95 914.56 0.00 14888.62 867.75 25826.04 00:21:07.564 =================================================================================================================== 00:21:07.564 Total : 7666.06 29.95 914.56 0.00 14888.62 867.75 25826.04 00:21:07.564 Received shutdown signal, test time was about 15.000000 seconds 00:21:07.564 00:21:07.564 Latency(us) 00:21:07.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.564 =================================================================================================================== 00:21:07.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.564 22:12:49 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:07.564 22:12:49 -- host/failover.sh@65 -- # count=3 00:21:07.564 22:12:49 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:07.564 22:12:49 -- host/failover.sh@73 -- # bdevperf_pid=4004068 00:21:07.564 22:12:49 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:21:07.564 22:12:49 -- host/failover.sh@75 -- # waitforlisten 4004068 /var/tmp/bdevperf.sock 00:21:07.564 22:12:49 -- common/autotest_common.sh@817 -- # '[' -z 4004068 ']' 00:21:07.564 22:12:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.564 22:12:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.564 22:12:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.564 22:12:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.564 22:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 22:12:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.821 22:12:49 -- common/autotest_common.sh@850 -- # return 0 00:21:07.822 22:12:49 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:08.079 [2024-04-24 22:12:50.225811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:08.079 22:12:50 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:08.336 [2024-04-24 22:12:50.554814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:08.336 22:12:50 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.900 NVMe0n1 00:21:08.900 22:12:51 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller 
-b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.464 00:21:09.464 22:12:51 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.028 00:21:10.028 22:12:52 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:10.028 22:12:52 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:10.595 22:12:52 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.889 22:12:52 -- host/failover.sh@87 -- # sleep 3 00:21:14.168 22:12:55 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:14.168 22:12:55 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:14.168 22:12:56 -- host/failover.sh@90 -- # run_test_pid=4004862 00:21:14.168 22:12:56 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.168 22:12:56 -- host/failover.sh@92 -- # wait 4004862 00:21:15.539 0 00:21:15.539 22:12:57 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:15.539 [2024-04-24 22:12:49.620610] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:21:15.539 [2024-04-24 22:12:49.620729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004068 ] 00:21:15.539 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.539 [2024-04-24 22:12:49.695716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.539 [2024-04-24 22:12:49.812421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.539 [2024-04-24 22:12:52.912296] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:15.539 [2024-04-24 22:12:52.912378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.539 [2024-04-24 22:12:52.912411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.540 [2024-04-24 22:12:52.912443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.540 [2024-04-24 22:12:52.912458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.540 [2024-04-24 22:12:52.912473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.540 [2024-04-24 22:12:52.912489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.540 [2024-04-24 22:12:52.912504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:15.540 [2024-04-24 22:12:52.912518] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:15.540 [2024-04-24 22:12:52.912534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:15.540 [2024-04-24 22:12:52.912593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1cab0 (9): Bad file descriptor 00:21:15.540 [2024-04-24 22:12:52.912628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:15.540 [2024-04-24 22:12:53.014568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:15.540 Running I/O for 1 seconds... 00:21:15.540 00:21:15.540 Latency(us) 00:21:15.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.540 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:15.540 Verification LBA range: start 0x0 length 0x4000 00:21:15.540 NVMe0n1 : 1.01 7835.83 30.61 0.00 0.00 16268.57 3616.62 13204.29 00:21:15.540 =================================================================================================================== 00:21:15.540 Total : 7835.83 30.61 0.00 0.00 16268.57 3616.62 13204.29 00:21:15.540 22:12:57 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.540 22:12:57 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:15.797 22:12:57 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.361 22:12:58 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:16.361 22:12:58 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:16.619 22:12:58 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:16.877 22:12:59 -- host/failover.sh@101 -- # sleep 3 00:21:20.152 22:13:02 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.152 22:13:02 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:20.152 22:13:02 -- host/failover.sh@108 -- # killprocess 4004068 00:21:20.152 22:13:02 -- common/autotest_common.sh@936 -- # '[' -z 4004068 ']' 00:21:20.152 22:13:02 -- common/autotest_common.sh@940 -- # kill -0 4004068 00:21:20.152 22:13:02 -- common/autotest_common.sh@941 -- # uname 00:21:20.152 22:13:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.152 22:13:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4004068 00:21:20.410 22:13:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:20.410 22:13:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:20.410 22:13:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4004068' 00:21:20.410 killing process with pid 4004068 00:21:20.410 22:13:02 -- common/autotest_common.sh@955 -- # kill 4004068 00:21:20.410 22:13:02 -- common/autotest_common.sh@960 -- # wait 4004068 00:21:20.668 22:13:02 -- host/failover.sh@110 -- # sync 00:21:20.668 22:13:02 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.925 22:13:03 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:20.925 22:13:03 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:20.925 22:13:03 -- host/failover.sh@116 -- # nvmftestfini 00:21:20.925 22:13:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:20.925 22:13:03 -- 
nvmf/common.sh@117 -- # sync 00:21:20.925 22:13:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:20.925 22:13:03 -- nvmf/common.sh@120 -- # set +e 00:21:20.925 22:13:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:20.925 22:13:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:20.925 rmmod nvme_tcp 00:21:20.925 rmmod nvme_fabrics 00:21:20.925 rmmod nvme_keyring 00:21:20.925 22:13:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:20.925 22:13:03 -- nvmf/common.sh@124 -- # set -e 00:21:20.925 22:13:03 -- nvmf/common.sh@125 -- # return 0 00:21:20.925 22:13:03 -- nvmf/common.sh@478 -- # '[' -n 4001788 ']' 00:21:20.925 22:13:03 -- nvmf/common.sh@479 -- # killprocess 4001788 00:21:20.925 22:13:03 -- common/autotest_common.sh@936 -- # '[' -z 4001788 ']' 00:21:20.925 22:13:03 -- common/autotest_common.sh@940 -- # kill -0 4001788 00:21:20.925 22:13:03 -- common/autotest_common.sh@941 -- # uname 00:21:20.925 22:13:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.925 22:13:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4001788 00:21:20.925 22:13:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:20.925 22:13:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:20.925 22:13:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4001788' 00:21:20.925 killing process with pid 4001788 00:21:20.925 22:13:03 -- common/autotest_common.sh@955 -- # kill 4001788 00:21:20.925 [2024-04-24 22:13:03.169819] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:20.925 22:13:03 -- common/autotest_common.sh@960 -- # wait 4001788 00:21:21.491 22:13:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:21.491 22:13:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:21.491 22:13:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 
00:21:21.491 22:13:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.491 22:13:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.491 22:13:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.491 22:13:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.491 22:13:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.392 22:13:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:23.392 00:21:23.392 real 0m38.526s 00:21:23.392 user 2m18.008s 00:21:23.392 sys 0m6.759s 00:21:23.392 22:13:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:23.392 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:21:23.392 ************************************ 00:21:23.392 END TEST nvmf_failover 00:21:23.392 ************************************ 00:21:23.392 22:13:05 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:23.392 22:13:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:23.392 22:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.392 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 ************************************ 00:21:23.651 START TEST nvmf_discovery 00:21:23.651 ************************************ 00:21:23.651 22:13:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:23.651 * Looking for test storage... 
00:21:23.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:23.651 22:13:05 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.651 22:13:05 -- nvmf/common.sh@7 -- # uname -s 00:21:23.651 22:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.651 22:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.651 22:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.651 22:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.651 22:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.651 22:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.651 22:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.651 22:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.651 22:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.651 22:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.651 22:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:23.651 22:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:23.651 22:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.651 22:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.651 22:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.651 22:13:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.651 22:13:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.651 22:13:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.651 22:13:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.651 22:13:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.651 22:13:05 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.651 22:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.651 22:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.651 22:13:05 -- paths/export.sh@5 -- # export PATH 00:21:23.651 22:13:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.651 22:13:05 -- nvmf/common.sh@47 -- # : 0 00:21:23.651 22:13:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:23.651 22:13:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:23.651 22:13:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.651 22:13:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.651 22:13:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.651 22:13:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:23.651 22:13:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:23.651 22:13:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:23.651 22:13:05 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:23.651 22:13:05 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:23.651 22:13:05 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:23.651 22:13:05 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:23.651 22:13:05 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:23.651 22:13:05 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:23.651 22:13:05 -- host/discovery.sh@25 -- # nvmftestinit 00:21:23.651 22:13:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:23.651 22:13:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.651 22:13:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:23.651 22:13:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:23.651 
22:13:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:23.651 22:13:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.651 22:13:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.651 22:13:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.651 22:13:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:23.651 22:13:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:23.651 22:13:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:23.651 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:21:26.181 22:13:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:26.181 22:13:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.181 22:13:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.181 22:13:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.181 22:13:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.181 22:13:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.181 22:13:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.181 22:13:08 -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.181 22:13:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.181 22:13:08 -- nvmf/common.sh@296 -- # e810=() 00:21:26.181 22:13:08 -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.181 22:13:08 -- nvmf/common.sh@297 -- # x722=() 00:21:26.181 22:13:08 -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.181 22:13:08 -- nvmf/common.sh@298 -- # mlx=() 00:21:26.181 22:13:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.181 22:13:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.182 22:13:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.182 22:13:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:26.182 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:26.182 22:13:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.182 22:13:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:26.182 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:26.182 22:13:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.182 22:13:08 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.182 22:13:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.182 22:13:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.182 22:13:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:26.182 Found net devices under 0000:84:00.0: cvl_0_0 00:21:26.182 22:13:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.182 22:13:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.182 22:13:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.182 22:13:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:26.182 Found net devices under 0000:84:00.1: cvl_0_1 00:21:26.182 22:13:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:26.182 22:13:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:26.182 22:13:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.182 22:13:08 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.182 22:13:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.182 22:13:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.182 22:13:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.182 22:13:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.182 22:13:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.182 22:13:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.182 22:13:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.182 22:13:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.182 22:13:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.182 22:13:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.182 22:13:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.182 22:13:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.182 22:13:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.182 22:13:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.182 22:13:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.182 22:13:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.182 22:13:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:26.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:26.182 00:21:26.182 --- 10.0.0.2 ping statistics --- 00:21:26.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.182 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:26.182 22:13:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:21:26.182 00:21:26.182 --- 10.0.0.1 ping statistics --- 00:21:26.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.182 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:26.182 22:13:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.182 22:13:08 -- nvmf/common.sh@411 -- # return 0 00:21:26.182 22:13:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:26.182 22:13:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.182 22:13:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:26.182 22:13:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.182 22:13:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:26.182 22:13:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:26.182 22:13:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:26.182 22:13:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:26.182 22:13:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:26.182 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.182 22:13:08 -- nvmf/common.sh@470 -- # nvmfpid=4007623 00:21:26.182 22:13:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:26.182 22:13:08 -- nvmf/common.sh@471 -- # waitforlisten 4007623 00:21:26.182 22:13:08 -- common/autotest_common.sh@817 
-- # '[' -z 4007623 ']' 00:21:26.182 22:13:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.182 22:13:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.182 22:13:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.182 22:13:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.182 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.182 [2024-04-24 22:13:08.269521] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:26.182 [2024-04-24 22:13:08.269607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.182 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.182 [2024-04-24 22:13:08.346921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.442 [2024-04-24 22:13:08.466637] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.442 [2024-04-24 22:13:08.466698] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.442 [2024-04-24 22:13:08.466715] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.442 [2024-04-24 22:13:08.466729] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.442 [2024-04-24 22:13:08.466741] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:26.442 [2024-04-24 22:13:08.466775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.442 22:13:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.442 22:13:08 -- common/autotest_common.sh@850 -- # return 0 00:21:26.442 22:13:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:26.442 22:13:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 22:13:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.442 22:13:08 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.442 22:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 [2024-04-24 22:13:08.624692] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.442 22:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.442 22:13:08 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:26.442 22:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 [2024-04-24 22:13:08.632652] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:26.442 [2024-04-24 22:13:08.632969] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:26.442 22:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.442 22:13:08 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:26.442 22:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 null0 
00:21:26.442 22:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.442 22:13:08 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:26.442 22:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 null1 00:21:26.442 22:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.442 22:13:08 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:26.442 22:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.442 22:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.442 22:13:08 -- host/discovery.sh@45 -- # hostpid=4007763 00:21:26.442 22:13:08 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:26.442 22:13:08 -- host/discovery.sh@46 -- # waitforlisten 4007763 /tmp/host.sock 00:21:26.442 22:13:08 -- common/autotest_common.sh@817 -- # '[' -z 4007763 ']' 00:21:26.442 22:13:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:26.442 22:13:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.442 22:13:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:26.442 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:26.442 22:13:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.442 22:13:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.700 [2024-04-24 22:13:08.709227] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:21:26.700 [2024-04-24 22:13:08.709305] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007763 ] 00:21:26.700 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.700 [2024-04-24 22:13:08.776942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.700 [2024-04-24 22:13:08.899242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.266 22:13:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:27.266 22:13:09 -- common/autotest_common.sh@850 -- # return 0 00:21:27.266 22:13:09 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:27.266 22:13:09 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:27.266 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.266 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.266 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.266 22:13:09 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:27.266 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.266 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.266 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.266 22:13:09 -- host/discovery.sh@72 -- # notify_id=0 00:21:27.266 22:13:09 -- host/discovery.sh@83 -- # get_subsystem_names 00:21:27.266 22:13:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:27.266 22:13:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 
22:13:09 -- host/discovery.sh@59 -- # sort 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:27.267 22:13:09 -- host/discovery.sh@84 -- # get_bdev_list 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # sort 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:27.267 22:13:09 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@87 -- # get_subsystem_names 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # sort 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:27.267 22:13:09 -- host/discovery.sh@88 -- # get_bdev_list 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.267 22:13:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # sort 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:27.267 22:13:09 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@91 -- # get_subsystem_names 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # sort 00:21:27.267 22:13:09 -- host/discovery.sh@59 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.267 22:13:09 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:27.267 22:13:09 -- host/discovery.sh@92 -- # get_bdev_list 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:27.267 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # sort 00:21:27.267 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.267 22:13:09 -- host/discovery.sh@55 -- # xargs 00:21:27.267 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@92 -- 
# [[ '' == '' ]] 00:21:27.525 22:13:09 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 [2024-04-24 22:13:09.559359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.525 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@97 -- # get_subsystem_names 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # sort 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # xargs 00:21:27.525 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:27.525 22:13:09 -- host/discovery.sh@98 -- # get_bdev_list 00:21:27.525 22:13:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:27.525 22:13:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 22:13:09 -- host/discovery.sh@55 -- # sort 00:21:27.525 22:13:09 -- host/discovery.sh@55 -- # xargs 00:21:27.525 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:27.525 22:13:09 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:27.525 22:13:09 -- host/discovery.sh@79 -- # expected_count=0 00:21:27.525 22:13:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:21:27.525 22:13:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:27.525 22:13:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:27.525 22:13:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:27.525 22:13:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:27.525 22:13:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:27.525 22:13:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:27.525 22:13:09 -- host/discovery.sh@74 -- # jq '. | length' 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@74 -- # notification_count=0 00:21:27.525 22:13:09 -- host/discovery.sh@75 -- # notify_id=0 00:21:27.525 22:13:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:27.525 22:13:09 -- common/autotest_common.sh@904 -- # return 0 00:21:27.525 22:13:09 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.525 22:13:09 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:27.525 22:13:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:27.525 22:13:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:27.525 22:13:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:27.525 22:13:09 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:27.525 22:13:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:27.525 22:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:27.525 22:13:09 -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 22:13:09 -- host/discovery.sh@59 -- # sort 00:21:27.783 22:13:09 -- host/discovery.sh@59 -- # xargs 00:21:27.784 22:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.784 22:13:09 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:21:27.784 22:13:09 -- common/autotest_common.sh@906 -- # sleep 1 00:21:28.349 [2024-04-24 22:13:10.313337] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:28.349 [2024-04-24 22:13:10.313378] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:28.349 [2024-04-24 22:13:10.313412] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:28.349 [2024-04-24 22:13:10.443843] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:28.349 [2024-04-24 22:13:10.503879] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:28.349 [2024-04-24 22:13:10.503908] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:28.606 22:13:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:28.606 22:13:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:28.606 22:13:10 -- common/autotest_common.sh@903 -- # 
get_subsystem_names 00:21:28.606 22:13:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:28.606 22:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.606 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:28.606 22:13:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:28.606 22:13:10 -- host/discovery.sh@59 -- # sort 00:21:28.606 22:13:10 -- host/discovery.sh@59 -- # xargs 00:21:28.606 22:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@904 -- # return 0 00:21:28.864 22:13:10 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:28.864 22:13:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:28.864 22:13:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:28.864 22:13:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:28.864 22:13:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.864 22:13:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:28.864 22:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.864 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 22:13:10 -- host/discovery.sh@55 -- # sort 00:21:28.864 22:13:10 -- host/discovery.sh@55 -- # xargs 00:21:28.864 22:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@904 -- # return 0 00:21:28.864 22:13:10 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 
00:21:28.864 22:13:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:28.864 22:13:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:28.864 22:13:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:28.864 22:13:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:28.864 22:13:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:28.864 22:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.864 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 22:13:10 -- host/discovery.sh@63 -- # sort -n 00:21:28.864 22:13:10 -- host/discovery.sh@63 -- # xargs 00:21:28.864 22:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:21:28.864 22:13:10 -- common/autotest_common.sh@904 -- # return 0 00:21:28.864 22:13:10 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:28.864 22:13:10 -- host/discovery.sh@79 -- # expected_count=1 00:21:28.864 22:13:10 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:28.864 22:13:10 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:28.864 22:13:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:28.864 22:13:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:28.864 22:13:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:28.864 22:13:10 -- host/discovery.sh@74 
-- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:28.864 22:13:10 -- host/discovery.sh@74 -- # jq '. | length' 00:21:28.864 22:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.864 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 22:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.864 22:13:11 -- host/discovery.sh@74 -- # notification_count=1 00:21:28.864 22:13:11 -- host/discovery.sh@75 -- # notify_id=1 00:21:28.864 22:13:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:28.864 22:13:11 -- common/autotest_common.sh@904 -- # return 0 00:21:28.864 22:13:11 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:28.864 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.864 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:28.864 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.864 22:13:11 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:28.864 22:13:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:28.864 22:13:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:28.864 22:13:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:28.864 22:13:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:28.864 22:13:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:28.864 22:13:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:28.864 22:13:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:28.865 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.865 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:28.865 22:13:11 -- host/discovery.sh@55 -- # sort 00:21:28.865 22:13:11 -- host/discovery.sh@55 -- # xargs 00:21:29.122 22:13:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:29.122 22:13:11 -- common/autotest_common.sh@904 -- # return 0 00:21:29.122 22:13:11 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:29.122 22:13:11 -- host/discovery.sh@79 -- # expected_count=1 00:21:29.122 22:13:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:29.122 22:13:11 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:29.122 22:13:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:29.122 22:13:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:29.122 22:13:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:29.122 22:13:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:29.122 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.122 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.122 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.122 22:13:11 -- host/discovery.sh@74 -- # notification_count=1 00:21:29.122 22:13:11 -- host/discovery.sh@75 -- # notify_id=2 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:29.122 22:13:11 -- common/autotest_common.sh@904 -- # return 0 00:21:29.122 22:13:11 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:29.122 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.122 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.122 [2024-04-24 22:13:11.348675] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.122 [2024-04-24 22:13:11.349165] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:29.122 [2024-04-24 22:13:11.349210] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:29.122 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.122 22:13:11 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:29.122 22:13:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:29.122 22:13:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:29.122 22:13:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:29.122 22:13:11 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:29.122 22:13:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:29.122 22:13:11 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:21:29.122 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.122 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.122 22:13:11 -- host/discovery.sh@59 -- # sort 00:21:29.122 22:13:11 -- host/discovery.sh@59 -- # xargs 00:21:29.122 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.380 22:13:11 -- common/autotest_common.sh@904 -- # return 0 00:21:29.380 22:13:11 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:29.380 22:13:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:29.380 22:13:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:29.380 22:13:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:29.380 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.380 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.380 22:13:11 -- host/discovery.sh@55 -- # sort 00:21:29.380 22:13:11 -- host/discovery.sh@55 -- # xargs 00:21:29.380 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:29.380 22:13:11 -- common/autotest_common.sh@904 -- # return 0 00:21:29.380 22:13:11 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:29.380 22:13:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:29.380 22:13:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:29.380 22:13:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:29.380 22:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.380 22:13:11 -- common/autotest_common.sh@10 -- # set +x 00:21:29.380 22:13:11 -- host/discovery.sh@63 -- # sort -n 00:21:29.380 22:13:11 -- host/discovery.sh@63 -- # xargs 00:21:29.380 22:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.380 [2024-04-24 22:13:11.476974] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:29.380 22:13:11 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:29.380 22:13:11 -- common/autotest_common.sh@906 -- # sleep 1 00:21:29.637 [2024-04-24 22:13:11.739220] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:29.637 [2024-04-24 22:13:11.739247] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:29.637 [2024-04-24 22:13:11.739257] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # sort -n 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # xargs 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.572 22:13:12 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:30.572 22:13:12 -- host/discovery.sh@79 -- # expected_count=0 00:21:30.572 22:13:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:30.572 22:13:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:30.572 22:13:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 22:13:12 -- host/discovery.sh@74 -- # notification_count=0 00:21:30.572 22:13:12 -- host/discovery.sh@75 -- # notify_id=2 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.572 22:13:12 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 [2024-04-24 22:13:12.604881] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:30.572 [2024-04-24 22:13:12.604918] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:30.572 [2024-04-24 22:13:12.607225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.572 [2024-04-24 22:13:12.607260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.572 [2024-04-24 22:13:12.607288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.572 [2024-04-24 22:13:12.607305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.572 [2024-04-24 22:13:12.607320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.572 [2024-04-24 22:13:12.607335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.572 [2024-04-24 22:13:12.607357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.572 [2024-04-24 22:13:12.607372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.572 [2024-04-24 22:13:12.607387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 22:13:12 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:30.572 22:13:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:30.572 22:13:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 22:13:12 -- host/discovery.sh@59 -- # sort 00:21:30.572 22:13:12 -- host/discovery.sh@59 -- # xargs 00:21:30.572 [2024-04-24 22:13:12.617225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 [2024-04-24 22:13:12.627272] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.627545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.627723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.627751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.627770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.627795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.627817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.627832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.627849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.627877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.572 [2024-04-24 22:13:12.637352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.637567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.637758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.637786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.637804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.637827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.637849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.637864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.637878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.637902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.572 [2024-04-24 22:13:12.647428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.647687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.647905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.647933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.647956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.647980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.648002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.648016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.648030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.648079] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.572 [2024-04-24 22:13:12.657507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.657751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.657944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.657972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.657989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.658012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.658034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.658048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.658062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.658096] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.572 [2024-04-24 22:13:12.667583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.667849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.668026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.668053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.668070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.668094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.668115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.668129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.668143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.668163] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.572 22:13:12 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:30.572 22:13:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:30.572 22:13:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 22:13:12 -- host/discovery.sh@55 -- # sort 00:21:30.572 22:13:12 -- host/discovery.sh@55 -- # xargs 00:21:30.572 [2024-04-24 22:13:12.677670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.677909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.678078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.678106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.678123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.678146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad 
file descriptor 00:21:30.572 [2024-04-24 22:13:12.678976] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.679002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.679018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:30.572 [2024-04-24 22:13:12.679052] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.572 [2024-04-24 22:13:12.687745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:30.572 [2024-04-24 22:13:12.687968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.688136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.572 [2024-04-24 22:13:12.688163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b4230 with addr=10.0.0.2, port=4420 00:21:30.572 [2024-04-24 22:13:12.688180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b4230 is same with the state(5) to be set 00:21:30.572 [2024-04-24 22:13:12.688204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b4230 (9): Bad file descriptor 00:21:30.572 [2024-04-24 22:13:12.688240] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:30.572 [2024-04-24 22:13:12.688259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:30.572 [2024-04-24 22:13:12.688273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:30.572 [2024-04-24 22:13:12.688293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.572 [2024-04-24 22:13:12.691240] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:30.572 [2024-04-24 22:13:12.691274] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.572 22:13:12 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:30.572 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # sort -n 00:21:30.572 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.572 22:13:12 -- host/discovery.sh@63 -- # xargs 00:21:30.572 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:21:30.572 
22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.572 22:13:12 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:30.572 22:13:12 -- host/discovery.sh@79 -- # expected_count=0 00:21:30.572 22:13:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.572 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:30.572 22:13:12 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:30.572 22:13:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:30.573 22:13:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:30.573 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.573 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.573 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.830 22:13:12 -- host/discovery.sh@74 -- # notification_count=0 00:21:30.830 22:13:12 -- host/discovery.sh@75 -- # notify_id=2 00:21:30.830 22:13:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:30.830 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.831 22:13:12 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:30.831 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.831 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.831 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.831 22:13:12 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.831 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:30.831 22:13:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:30.831 22:13:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:30.831 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.831 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.831 22:13:12 -- host/discovery.sh@59 -- # sort 00:21:30.831 22:13:12 -- host/discovery.sh@59 -- # xargs 00:21:30.831 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:30.831 
22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.831 22:13:12 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.831 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:30.831 22:13:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:30.831 22:13:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:30.831 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.831 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.831 22:13:12 -- host/discovery.sh@55 -- # sort 00:21:30.831 22:13:12 -- host/discovery.sh@55 -- # xargs 00:21:30.831 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:30.831 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.831 22:13:12 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:30.831 22:13:12 -- host/discovery.sh@79 -- # expected_count=2 00:21:30.831 22:13:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:30.831 22:13:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:30.831 22:13:12 -- common/autotest_common.sh@901 -- # local max=10 00:21:30.831 22:13:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # 
get_notification_count 00:21:30.831 22:13:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:30.831 22:13:12 -- host/discovery.sh@74 -- # jq '. | length' 00:21:30.831 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.831 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:30.831 22:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.831 22:13:12 -- host/discovery.sh@74 -- # notification_count=2 00:21:30.831 22:13:12 -- host/discovery.sh@75 -- # notify_id=4 00:21:30.831 22:13:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:30.831 22:13:12 -- common/autotest_common.sh@904 -- # return 0 00:21:30.831 22:13:12 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:30.831 22:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.831 22:13:12 -- common/autotest_common.sh@10 -- # set +x 00:21:31.763 [2024-04-24 22:13:14.012435] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:31.763 [2024-04-24 22:13:14.012460] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:31.763 [2024-04-24 22:13:14.012484] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:32.021 [2024-04-24 22:13:14.138900] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:32.279 [2024-04-24 22:13:14.452121] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:32.279 [2024-04-24 22:13:14.452160] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:32.279 22:13:14 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.279 22:13:14 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.279 22:13:14 -- common/autotest_common.sh@638 -- # local es=0 00:21:32.279 22:13:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.279 22:13:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:32.279 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:32.279 22:13:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:32.279 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:32.279 22:13:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.279 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.279 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.279 request: 00:21:32.279 { 00:21:32.279 "name": "nvme", 00:21:32.279 "trtype": "tcp", 00:21:32.279 "traddr": "10.0.0.2", 00:21:32.279 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:32.279 "adrfam": "ipv4", 00:21:32.279 "trsvcid": "8009", 00:21:32.279 "wait_for_attach": true, 00:21:32.279 "method": "bdev_nvme_start_discovery", 00:21:32.279 "req_id": 1 00:21:32.279 } 00:21:32.279 Got JSON-RPC error response 00:21:32.279 response: 00:21:32.279 { 00:21:32.279 "code": -17, 00:21:32.279 "message": "File exists" 00:21:32.279 } 00:21:32.279 22:13:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:32.279 22:13:14 -- common/autotest_common.sh@641 -- # es=1 00:21:32.279 22:13:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:32.279 22:13:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:32.279 22:13:14 
-- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:32.279 22:13:14 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:32.279 22:13:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:32.279 22:13:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:32.279 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.279 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.279 22:13:14 -- host/discovery.sh@67 -- # sort 00:21:32.279 22:13:14 -- host/discovery.sh@67 -- # xargs 00:21:32.279 22:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.582 22:13:14 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:32.582 22:13:14 -- host/discovery.sh@146 -- # get_bdev_list 00:21:32.582 22:13:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.582 22:13:14 -- host/discovery.sh@55 -- # sort 00:21:32.582 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.582 22:13:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.582 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.582 22:13:14 -- host/discovery.sh@55 -- # xargs 00:21:32.582 22:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.582 22:13:14 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:32.582 22:13:14 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.582 22:13:14 -- common/autotest_common.sh@638 -- # local es=0 00:21:32.582 22:13:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.582 22:13:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:32.582 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:21:32.582 22:13:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:32.582 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:32.582 22:13:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:32.582 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.582 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.582 request: 00:21:32.582 { 00:21:32.582 "name": "nvme_second", 00:21:32.582 "trtype": "tcp", 00:21:32.582 "traddr": "10.0.0.2", 00:21:32.582 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:32.582 "adrfam": "ipv4", 00:21:32.582 "trsvcid": "8009", 00:21:32.582 "wait_for_attach": true, 00:21:32.582 "method": "bdev_nvme_start_discovery", 00:21:32.582 "req_id": 1 00:21:32.582 } 00:21:32.582 Got JSON-RPC error response 00:21:32.582 response: 00:21:32.582 { 00:21:32.582 "code": -17, 00:21:32.582 "message": "File exists" 00:21:32.582 } 00:21:32.582 22:13:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:32.582 22:13:14 -- common/autotest_common.sh@641 -- # es=1 00:21:32.582 22:13:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:32.582 22:13:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:32.582 22:13:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:32.582 22:13:14 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:32.582 22:13:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:32.582 22:13:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:32.583 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.583 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.583 22:13:14 -- host/discovery.sh@67 -- # sort 00:21:32.583 22:13:14 -- host/discovery.sh@67 -- # xargs 00:21:32.583 22:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.583 22:13:14 -- 
host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:32.583 22:13:14 -- host/discovery.sh@152 -- # get_bdev_list 00:21:32.583 22:13:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.583 22:13:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.583 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.583 22:13:14 -- host/discovery.sh@55 -- # sort 00:21:32.583 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.583 22:13:14 -- host/discovery.sh@55 -- # xargs 00:21:32.583 22:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.583 22:13:14 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:32.583 22:13:14 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:32.583 22:13:14 -- common/autotest_common.sh@638 -- # local es=0 00:21:32.583 22:13:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:32.583 22:13:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:32.583 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:32.583 22:13:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:32.583 22:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:32.583 22:13:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:32.583 22:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.863 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:21:33.796 [2024-04-24 22:13:15.812504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.796 [2024-04-24 
22:13:15.812730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.796 [2024-04-24 22:13:15.812760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e03d0 with addr=10.0.0.2, port=8010 00:21:33.796 [2024-04-24 22:13:15.812788] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:33.796 [2024-04-24 22:13:15.812805] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:33.796 [2024-04-24 22:13:15.812819] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:34.730 [2024-04-24 22:13:16.814977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.730 [2024-04-24 22:13:16.815229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.730 [2024-04-24 22:13:16.815258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e03d0 with addr=10.0.0.2, port=8010 00:21:34.730 [2024-04-24 22:13:16.815279] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:34.730 [2024-04-24 22:13:16.815293] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:34.730 [2024-04-24 22:13:16.815306] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:35.665 [2024-04-24 22:13:17.817091] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:35.665 request: 00:21:35.665 { 00:21:35.665 "name": "nvme_second", 00:21:35.665 "trtype": "tcp", 00:21:35.665 "traddr": "10.0.0.2", 00:21:35.665 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:35.665 "adrfam": "ipv4", 00:21:35.665 "trsvcid": "8010", 00:21:35.665 "attach_timeout_ms": 3000, 00:21:35.665 "method": "bdev_nvme_start_discovery", 00:21:35.665 "req_id": 1 00:21:35.665 } 00:21:35.665 Got JSON-RPC error response 00:21:35.665 response: 
00:21:35.665 { 00:21:35.665 "code": -110, 00:21:35.665 "message": "Connection timed out" 00:21:35.665 } 00:21:35.665 22:13:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:35.665 22:13:17 -- common/autotest_common.sh@641 -- # es=1 00:21:35.665 22:13:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:35.665 22:13:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:35.665 22:13:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:35.665 22:13:17 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:35.665 22:13:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:35.665 22:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.665 22:13:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:35.665 22:13:17 -- common/autotest_common.sh@10 -- # set +x 00:21:35.665 22:13:17 -- host/discovery.sh@67 -- # sort 00:21:35.665 22:13:17 -- host/discovery.sh@67 -- # xargs 00:21:35.665 22:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.665 22:13:17 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:35.665 22:13:17 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:35.665 22:13:17 -- host/discovery.sh@161 -- # kill 4007763 00:21:35.665 22:13:17 -- host/discovery.sh@162 -- # nvmftestfini 00:21:35.665 22:13:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:35.665 22:13:17 -- nvmf/common.sh@117 -- # sync 00:21:35.665 22:13:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.665 22:13:17 -- nvmf/common.sh@120 -- # set +e 00:21:35.665 22:13:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.665 22:13:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.665 rmmod nvme_tcp 00:21:35.665 rmmod nvme_fabrics 00:21:35.665 rmmod nvme_keyring 00:21:35.665 22:13:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.923 22:13:17 -- nvmf/common.sh@124 -- # set -e 00:21:35.923 22:13:17 -- nvmf/common.sh@125 -- # return 0 00:21:35.923 
22:13:17 -- nvmf/common.sh@478 -- # '[' -n 4007623 ']' 00:21:35.923 22:13:17 -- nvmf/common.sh@479 -- # killprocess 4007623 00:21:35.923 22:13:17 -- common/autotest_common.sh@936 -- # '[' -z 4007623 ']' 00:21:35.923 22:13:17 -- common/autotest_common.sh@940 -- # kill -0 4007623 00:21:35.923 22:13:17 -- common/autotest_common.sh@941 -- # uname 00:21:35.923 22:13:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.923 22:13:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4007623 00:21:35.923 22:13:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:35.923 22:13:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:35.923 22:13:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4007623' 00:21:35.923 killing process with pid 4007623 00:21:35.923 22:13:17 -- common/autotest_common.sh@955 -- # kill 4007623 00:21:35.923 [2024-04-24 22:13:17.970466] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:35.923 22:13:17 -- common/autotest_common.sh@960 -- # wait 4007623 00:21:36.181 22:13:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:36.181 22:13:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:36.181 22:13:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:36.181 22:13:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.181 22:13:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.181 22:13:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.181 22:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.181 22:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.083 22:13:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.083 00:21:38.083 real 0m14.646s 00:21:38.083 user 0m21.823s 00:21:38.083 sys 0m3.342s 
00:21:38.083 22:13:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:38.083 22:13:20 -- common/autotest_common.sh@10 -- # set +x 00:21:38.083 ************************************ 00:21:38.083 END TEST nvmf_discovery 00:21:38.083 ************************************ 00:21:38.342 22:13:20 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:38.342 22:13:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:38.342 22:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.342 22:13:20 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 ************************************ 00:21:38.342 START TEST nvmf_discovery_remove_ifc 00:21:38.342 ************************************ 00:21:38.342 22:13:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:38.342 * Looking for test storage... 
00:21:38.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.342 22:13:20 -- nvmf/common.sh@7 -- # uname -s 00:21:38.342 22:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.342 22:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.342 22:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.342 22:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.342 22:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.342 22:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.342 22:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.342 22:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.342 22:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.342 22:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.342 22:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:38.342 22:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:38.342 22:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.342 22:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.342 22:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.342 22:13:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.342 22:13:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.342 22:13:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.342 22:13:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.342 22:13:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.342 22:13:20 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.342 22:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.342 22:13:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.342 22:13:20 -- paths/export.sh@5 -- # export PATH 00:21:38.342 22:13:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.342 22:13:20 -- nvmf/common.sh@47 -- # : 0 00:21:38.342 22:13:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.342 22:13:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.342 22:13:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.342 22:13:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.342 22:13:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.342 22:13:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.342 22:13:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.342 22:13:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:38.342 22:13:20 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:38.342 22:13:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:38.342 22:13:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.342 22:13:20 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:21:38.342 22:13:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:38.342 22:13:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:38.342 22:13:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.342 22:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.342 22:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.342 22:13:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:38.342 22:13:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:38.342 22:13:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.342 22:13:20 -- common/autotest_common.sh@10 -- # set +x 00:21:40.871 22:13:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:40.871 22:13:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.871 22:13:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.871 22:13:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.871 22:13:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.871 22:13:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.871 22:13:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.871 22:13:22 -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.871 22:13:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.871 22:13:22 -- nvmf/common.sh@296 -- # e810=() 00:21:40.871 22:13:22 -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.871 22:13:22 -- nvmf/common.sh@297 -- # x722=() 00:21:40.871 22:13:22 -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.871 22:13:22 -- nvmf/common.sh@298 -- # mlx=() 00:21:40.871 22:13:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.871 22:13:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.871 22:13:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.871 22:13:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.871 22:13:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.871 22:13:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:40.871 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:40.871 22:13:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.871 22:13:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:40.871 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:40.871 22:13:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.871 
22:13:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.871 22:13:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.871 22:13:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.871 22:13:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:40.871 Found net devices under 0000:84:00.0: cvl_0_0 00:21:40.871 22:13:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.871 22:13:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.871 22:13:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.871 22:13:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.871 22:13:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:40.871 Found net devices under 0000:84:00.1: cvl_0_1 00:21:40.871 22:13:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.871 22:13:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:40.871 22:13:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:40.871 22:13:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:40.871 22:13:22 -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:40.871 22:13:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.871 22:13:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.871 22:13:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.871 22:13:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.872 22:13:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.872 22:13:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.872 22:13:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.872 22:13:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.872 22:13:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.872 22:13:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.872 22:13:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.872 22:13:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.872 22:13:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.872 22:13:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.872 22:13:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.872 22:13:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.872 22:13:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.872 22:13:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.872 22:13:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:40.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:40.872 00:21:40.872 --- 10.0.0.2 ping statistics --- 00:21:40.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.872 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:40.872 22:13:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:40.872 00:21:40.872 --- 10.0.0.1 ping statistics --- 00:21:40.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.872 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:40.872 22:13:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.872 22:13:23 -- nvmf/common.sh@411 -- # return 0 00:21:40.872 22:13:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:40.872 22:13:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.872 22:13:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:40.872 22:13:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:40.872 22:13:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.872 22:13:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:40.872 22:13:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:40.872 22:13:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:40.872 22:13:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:40.872 22:13:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:40.872 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:40.872 22:13:23 -- nvmf/common.sh@470 -- # nvmfpid=4010958 00:21:40.872 22:13:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.872 22:13:23 -- nvmf/common.sh@471 -- # waitforlisten 4010958 00:21:40.872 22:13:23 -- 
common/autotest_common.sh@817 -- # '[' -z 4010958 ']' 00:21:40.872 22:13:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.872 22:13:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.872 22:13:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.872 22:13:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.872 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:40.872 [2024-04-24 22:13:23.125966] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:40.872 [2024-04-24 22:13:23.126054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.130 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.130 [2024-04-24 22:13:23.201515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.130 [2024-04-24 22:13:23.320443] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.130 [2024-04-24 22:13:23.320505] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.130 [2024-04-24 22:13:23.320522] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.130 [2024-04-24 22:13:23.320536] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.130 [2024-04-24 22:13:23.320548] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.130 [2024-04-24 22:13:23.320586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.388 22:13:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.388 22:13:23 -- common/autotest_common.sh@850 -- # return 0 00:21:41.388 22:13:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:41.388 22:13:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:41.388 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 22:13:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.388 22:13:23 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:41.388 22:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.388 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 [2024-04-24 22:13:23.482256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.388 [2024-04-24 22:13:23.490210] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:41.388 [2024-04-24 22:13:23.490551] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:41.388 null0 00:21:41.388 [2024-04-24 22:13:23.522402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.388 22:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.388 22:13:23 -- host/discovery_remove_ifc.sh@59 -- # hostpid=4011092 00:21:41.388 22:13:23 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4011092 /tmp/host.sock 00:21:41.388 22:13:23 -- common/autotest_common.sh@817 -- # '[' -z 4011092 ']' 00:21:41.388 22:13:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:41.388 22:13:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:41.388 22:13:23 -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:41.388 22:13:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:41.388 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:41.388 22:13:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:41.388 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 [2024-04-24 22:13:23.594582] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:41.388 [2024-04-24 22:13:23.594679] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011092 ] 00:21:41.388 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.647 [2024-04-24 22:13:23.664612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.647 [2024-04-24 22:13:23.784390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.908 22:13:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.908 22:13:23 -- common/autotest_common.sh@850 -- # return 0 00:21:41.908 22:13:23 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.908 22:13:23 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:41.908 22:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.908 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.908 22:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.908 22:13:23 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:41.908 22:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:21:41.908 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:21:41.908 22:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.908 22:13:24 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:41.908 22:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.908 22:13:24 -- common/autotest_common.sh@10 -- # set +x 00:21:43.282 [2024-04-24 22:13:25.122298] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:43.282 [2024-04-24 22:13:25.122341] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:43.282 [2024-04-24 22:13:25.122367] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.282 [2024-04-24 22:13:25.251804] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:43.282 [2024-04-24 22:13:25.352902] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:43.282 [2024-04-24 22:13:25.352975] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:43.282 [2024-04-24 22:13:25.353022] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:43.282 [2024-04-24 22:13:25.353050] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:43.282 [2024-04-24 22:13:25.353089] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:43.282 22:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:43.282 
22:13:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.282 22:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:43.282 22:13:25 -- common/autotest_common.sh@10 -- # set +x 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.282 [2024-04-24 22:13:25.359027] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6e7a90 was disconnected and freed. delete nvme_qpair. 00:21:43.282 22:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.282 22:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.282 22:13:25 -- common/autotest_common.sh@10 -- # set +x 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.282 22:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:43.282 22:13:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:44.657 22:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:44.657 22:13:26 -- common/autotest_common.sh@10 -- # set +x 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:44.657 22:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:44.657 22:13:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:45.589 22:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:45.589 22:13:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:45.589 22:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:45.589 22:13:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.521 22:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:46.521 22:13:28 -- common/autotest_common.sh@10 -- # set +x 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:21:46.521 22:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:46.521 22:13:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:47.892 22:13:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:47.892 22:13:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.892 22:13:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:47.892 22:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.892 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:21:47.892 22:13:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:47.893 22:13:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:47.893 22:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.893 22:13:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:47.893 22:13:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.825 [2024-04-24 22:13:30.793699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:48.825 [2024-04-24 22:13:30.793777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.825 [2024-04-24 22:13:30.793804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.825 [2024-04-24 22:13:30.793824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.825 [2024-04-24 22:13:30.793841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.825 22:13:30 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.825 [2024-04-24 22:13:30.793856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.825 [2024-04-24 22:13:30.793877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.825 [2024-04-24 22:13:30.793893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.825 [2024-04-24 22:13:30.793916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.825 [2024-04-24 22:13:30.793934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.825 [2024-04-24 22:13:30.793950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.825 [2024-04-24 22:13:30.793964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6adf00 is same with the state(5) to be set 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:48.825 22:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.825 22:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.825 [2024-04-24 22:13:30.803714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adf00 (9): Bad file descriptor 00:21:48.825 22:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.825 [2024-04-24 22:13:30.813763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting 
controller 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:48.825 22:13:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:49.758 22:13:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.758 22:13:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.758 22:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.758 22:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:49.758 22:13:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.758 22:13:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.758 22:13:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.758 [2024-04-24 22:13:31.853441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:50.701 [2024-04-24 22:13:32.877458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:50.701 [2024-04-24 22:13:32.877550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6adf00 with addr=10.0.0.2, port=4420 00:21:50.701 [2024-04-24 22:13:32.877582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6adf00 is same with the state(5) to be set 00:21:50.701 [2024-04-24 22:13:32.878118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adf00 (9): Bad file descriptor 00:21:50.701 [2024-04-24 22:13:32.878174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.701 [2024-04-24 22:13:32.878233] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:50.701 [2024-04-24 22:13:32.878279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.701 [2024-04-24 22:13:32.878306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.701 [2024-04-24 22:13:32.878328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.701 [2024-04-24 22:13:32.878353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.701 [2024-04-24 22:13:32.878369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.701 [2024-04-24 22:13:32.878385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.701 [2024-04-24 22:13:32.878420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.701 [2024-04-24 22:13:32.878437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.701 [2024-04-24 22:13:32.878463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.701 [2024-04-24 22:13:32.878479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.701 [2024-04-24 22:13:32.878494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:21:50.701 [2024-04-24 22:13:32.878625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ae310 (9): Bad file descriptor 00:21:50.701 [2024-04-24 22:13:32.879649] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:50.701 [2024-04-24 22:13:32.879675] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:50.701 22:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.701 22:13:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:50.701 22:13:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.079 22:13:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.079 22:13:33 -- common/autotest_common.sh@10 -- # set +x 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.079 22:13:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.079 22:13:33 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:21:52.079 22:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.079 22:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.079 22:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:52.079 22:13:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.048 [2024-04-24 22:13:34.936564] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:53.048 [2024-04-24 22:13:34.936604] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:53.048 [2024-04-24 22:13:34.936632] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:53.048 [2024-04-24 22:13:35.022888] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.048 22:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.048 22:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:53.048 [2024-04-24 22:13:35.126172] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:53.048 [2024-04-24 22:13:35.126225] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:53.048 [2024-04-24 22:13:35.126264] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with 
offset 0 00:21:53.048 [2024-04-24 22:13:35.126290] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:53.048 [2024-04-24 22:13:35.126306] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:53.048 22:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.048 [2024-04-24 22:13:35.134674] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6f24a0 was disconnected and freed. delete nvme_qpair. 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:53.048 22:13:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.982 22:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.982 22:13:36 -- common/autotest_common.sh@10 -- # set +x 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:53.982 22:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:53.982 22:13:36 -- host/discovery_remove_ifc.sh@90 -- # killprocess 4011092 00:21:54.241 22:13:36 -- common/autotest_common.sh@936 -- # '[' -z 4011092 ']' 00:21:54.241 22:13:36 -- common/autotest_common.sh@940 -- # kill -0 4011092 00:21:54.241 22:13:36 -- common/autotest_common.sh@941 -- # uname 00:21:54.241 22:13:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.241 22:13:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4011092 
00:21:54.241 22:13:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:54.241 22:13:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:54.241 22:13:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4011092' 00:21:54.241 killing process with pid 4011092 00:21:54.241 22:13:36 -- common/autotest_common.sh@955 -- # kill 4011092 00:21:54.241 22:13:36 -- common/autotest_common.sh@960 -- # wait 4011092 00:21:54.498 22:13:36 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:54.498 22:13:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:54.498 22:13:36 -- nvmf/common.sh@117 -- # sync 00:21:54.498 22:13:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.498 22:13:36 -- nvmf/common.sh@120 -- # set +e 00:21:54.498 22:13:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.498 22:13:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.498 rmmod nvme_tcp 00:21:54.498 rmmod nvme_fabrics 00:21:54.498 rmmod nvme_keyring 00:21:54.498 22:13:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.498 22:13:36 -- nvmf/common.sh@124 -- # set -e 00:21:54.498 22:13:36 -- nvmf/common.sh@125 -- # return 0 00:21:54.498 22:13:36 -- nvmf/common.sh@478 -- # '[' -n 4010958 ']' 00:21:54.498 22:13:36 -- nvmf/common.sh@479 -- # killprocess 4010958 00:21:54.498 22:13:36 -- common/autotest_common.sh@936 -- # '[' -z 4010958 ']' 00:21:54.498 22:13:36 -- common/autotest_common.sh@940 -- # kill -0 4010958 00:21:54.498 22:13:36 -- common/autotest_common.sh@941 -- # uname 00:21:54.498 22:13:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.498 22:13:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4010958 00:21:54.498 22:13:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:54.498 22:13:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:54.498 22:13:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4010958' 
00:21:54.498 killing process with pid 4010958 00:21:54.498 22:13:36 -- common/autotest_common.sh@955 -- # kill 4010958 00:21:54.498 [2024-04-24 22:13:36.629166] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:54.498 22:13:36 -- common/autotest_common.sh@960 -- # wait 4010958 00:21:54.757 22:13:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:54.757 22:13:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:54.757 22:13:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:54.757 22:13:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.757 22:13:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.757 22:13:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.757 22:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.757 22:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.290 22:13:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.290 00:21:57.290 real 0m18.473s 00:21:57.290 user 0m25.686s 00:21:57.290 sys 0m3.423s 00:21:57.290 22:13:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:57.290 22:13:38 -- common/autotest_common.sh@10 -- # set +x 00:21:57.290 ************************************ 00:21:57.290 END TEST nvmf_discovery_remove_ifc 00:21:57.290 ************************************ 00:21:57.290 22:13:38 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:57.290 22:13:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:57.290 22:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:57.290 22:13:39 -- common/autotest_common.sh@10 -- # set +x 00:21:57.290 ************************************ 00:21:57.290 START TEST 
nvmf_identify_kernel_target 00:21:57.290 ************************************ 00:21:57.290 22:13:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:57.290 * Looking for test storage... 00:21:57.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.290 22:13:39 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.290 22:13:39 -- nvmf/common.sh@7 -- # uname -s 00:21:57.290 22:13:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.290 22:13:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.290 22:13:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.290 22:13:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.290 22:13:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.290 22:13:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.290 22:13:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.290 22:13:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.290 22:13:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.290 22:13:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.290 22:13:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.290 22:13:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.290 22:13:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.290 22:13:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.290 22:13:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.290 22:13:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.290 22:13:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.290 22:13:39 
-- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.290 22:13:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.290 22:13:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.290 22:13:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.290 22:13:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.290 22:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.290 22:13:39 -- 
paths/export.sh@5 -- # export PATH 00:21:57.290 22:13:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.290 22:13:39 -- nvmf/common.sh@47 -- # : 0 00:21:57.290 22:13:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.290 22:13:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.290 22:13:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.290 22:13:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.290 22:13:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.290 22:13:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.290 22:13:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.290 22:13:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.290 22:13:39 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:57.290 22:13:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:57.290 22:13:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.290 22:13:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:57.290 22:13:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:57.290 22:13:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:57.290 22:13:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.290 22:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.290 22:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.290 22:13:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:57.290 
22:13:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:57.290 22:13:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.290 22:13:39 -- common/autotest_common.sh@10 -- # set +x 00:21:59.192 22:13:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:59.192 22:13:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.192 22:13:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.192 22:13:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.192 22:13:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.192 22:13:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.192 22:13:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.192 22:13:41 -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.192 22:13:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.192 22:13:41 -- nvmf/common.sh@296 -- # e810=() 00:21:59.192 22:13:41 -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.192 22:13:41 -- nvmf/common.sh@297 -- # x722=() 00:21:59.192 22:13:41 -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.192 22:13:41 -- nvmf/common.sh@298 -- # mlx=() 00:21:59.192 22:13:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.192 22:13:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.192 22:13:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.192 22:13:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.192 22:13:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.192 22:13:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:59.192 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:59.192 22:13:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.192 22:13:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:59.192 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:59.192 22:13:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.192 22:13:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.192 22:13:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.192 22:13:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:59.192 Found net devices under 0000:84:00.0: cvl_0_0 00:21:59.192 22:13:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.192 22:13:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.192 22:13:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.192 22:13:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.192 22:13:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:59.192 Found net devices under 0000:84:00.1: cvl_0_1 00:21:59.192 22:13:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.192 22:13:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:59.192 22:13:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:59.192 22:13:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:59.192 22:13:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.192 22:13:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.192 22:13:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.192 22:13:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.192 22:13:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.192 22:13:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.192 22:13:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:21:59.192 22:13:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.192 22:13:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.192 22:13:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.192 22:13:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.192 22:13:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.192 22:13:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.192 22:13:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.192 22:13:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.192 22:13:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.192 22:13:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.450 22:13:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.450 22:13:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.450 22:13:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:59.450 00:21:59.450 --- 10.0.0.2 ping statistics --- 00:21:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.450 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:59.450 22:13:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:21:59.450 00:21:59.450 --- 10.0.0.1 ping statistics --- 00:21:59.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.450 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:59.450 22:13:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.450 22:13:41 -- nvmf/common.sh@411 -- # return 0 00:21:59.450 22:13:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:59.450 22:13:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.450 22:13:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.450 22:13:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:59.450 22:13:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:59.450 22:13:41 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:59.450 22:13:41 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:59.450 22:13:41 -- nvmf/common.sh@717 -- # local ip 00:21:59.450 22:13:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:59.450 22:13:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:59.450 22:13:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.450 22:13:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.450 22:13:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:59.450 22:13:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:59.450 22:13:41 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:59.450 22:13:41 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:59.450 22:13:41 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:59.450 22:13:41 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:59.450 22:13:41 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:59.450 22:13:41 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:59.450 22:13:41 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:59.450 22:13:41 -- nvmf/common.sh@628 -- # local block nvme 00:21:59.450 22:13:41 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:59.450 22:13:41 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:59.450 22:13:41 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:00.823 Waiting for block devices as requested 00:22:00.823 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:22:00.823 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:00.823 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:00.823 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:00.823 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:01.081 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:01.081 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:01.081 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:01.081 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:01.339 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:01.339 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:01.339 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:01.339 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:01.597 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:01.597 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:01.597 0000:80:04.1 (8086 0e21): vfio-pci 
-> ioatdma 00:22:01.597 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:01.856 22:13:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:01.856 22:13:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:01.856 22:13:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:01.856 22:13:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:01.856 22:13:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:01.856 22:13:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:01.856 22:13:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:01.856 22:13:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:01.856 22:13:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:01.856 No valid GPT data, bailing 00:22:01.856 22:13:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:01.856 22:13:43 -- scripts/common.sh@391 -- # pt= 00:22:01.856 22:13:43 -- scripts/common.sh@392 -- # return 1 00:22:01.856 22:13:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:01.856 22:13:43 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:01.857 22:13:43 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.857 22:13:43 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:01.857 22:13:43 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:01.857 22:13:43 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:01.857 22:13:43 -- nvmf/common.sh@656 -- # echo 1 00:22:01.857 22:13:43 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:01.857 22:13:43 -- nvmf/common.sh@658 -- # echo 1 00:22:01.857 22:13:43 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:01.857 22:13:43 -- nvmf/common.sh@661 -- # echo tcp 00:22:01.857 22:13:43 -- nvmf/common.sh@662 -- # 
echo 4420 00:22:01.857 22:13:43 -- nvmf/common.sh@663 -- # echo ipv4 00:22:01.857 22:13:43 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:01.857 22:13:43 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:22:01.857 00:22:01.857 Discovery Log Number of Records 2, Generation counter 2 00:22:01.857 =====Discovery Log Entry 0====== 00:22:01.857 trtype: tcp 00:22:01.857 adrfam: ipv4 00:22:01.857 subtype: current discovery subsystem 00:22:01.857 treq: not specified, sq flow control disable supported 00:22:01.857 portid: 1 00:22:01.857 trsvcid: 4420 00:22:01.857 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:01.857 traddr: 10.0.0.1 00:22:01.857 eflags: none 00:22:01.857 sectype: none 00:22:01.857 =====Discovery Log Entry 1====== 00:22:01.857 trtype: tcp 00:22:01.857 adrfam: ipv4 00:22:01.857 subtype: nvme subsystem 00:22:01.857 treq: not specified, sq flow control disable supported 00:22:01.857 portid: 1 00:22:01.857 trsvcid: 4420 00:22:01.857 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:01.857 traddr: 10.0.0.1 00:22:01.857 eflags: none 00:22:01.857 sectype: none 00:22:01.857 22:13:44 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:01.857 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:01.857 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.857 ===================================================== 00:22:01.857 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:01.857 ===================================================== 00:22:01.857 Controller Capabilities/Features 00:22:01.857 ================================ 00:22:01.857 Vendor ID: 0000 
00:22:01.857 Subsystem Vendor ID: 0000 00:22:01.857 Serial Number: 0a7751cf232fc14a3895 00:22:01.857 Model Number: Linux 00:22:01.857 Firmware Version: 6.7.0-68 00:22:01.857 Recommended Arb Burst: 0 00:22:01.857 IEEE OUI Identifier: 00 00 00 00:22:01.857 Multi-path I/O 00:22:01.857 May have multiple subsystem ports: No 00:22:01.857 May have multiple controllers: No 00:22:01.857 Associated with SR-IOV VF: No 00:22:01.857 Max Data Transfer Size: Unlimited 00:22:01.857 Max Number of Namespaces: 0 00:22:01.857 Max Number of I/O Queues: 1024 00:22:01.857 NVMe Specification Version (VS): 1.3 00:22:01.857 NVMe Specification Version (Identify): 1.3 00:22:01.857 Maximum Queue Entries: 1024 00:22:01.857 Contiguous Queues Required: No 00:22:01.857 Arbitration Mechanisms Supported 00:22:01.857 Weighted Round Robin: Not Supported 00:22:01.857 Vendor Specific: Not Supported 00:22:01.857 Reset Timeout: 7500 ms 00:22:01.857 Doorbell Stride: 4 bytes 00:22:01.857 NVM Subsystem Reset: Not Supported 00:22:01.857 Command Sets Supported 00:22:01.857 NVM Command Set: Supported 00:22:01.857 Boot Partition: Not Supported 00:22:01.857 Memory Page Size Minimum: 4096 bytes 00:22:01.857 Memory Page Size Maximum: 4096 bytes 00:22:01.857 Persistent Memory Region: Not Supported 00:22:01.857 Optional Asynchronous Events Supported 00:22:01.857 Namespace Attribute Notices: Not Supported 00:22:01.857 Firmware Activation Notices: Not Supported 00:22:01.857 ANA Change Notices: Not Supported 00:22:01.857 PLE Aggregate Log Change Notices: Not Supported 00:22:01.857 LBA Status Info Alert Notices: Not Supported 00:22:01.857 EGE Aggregate Log Change Notices: Not Supported 00:22:01.857 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.857 Zone Descriptor Change Notices: Not Supported 00:22:01.857 Discovery Log Change Notices: Supported 00:22:01.857 Controller Attributes 00:22:01.857 128-bit Host Identifier: Not Supported 00:22:01.857 Non-Operational Permissive Mode: Not Supported 00:22:01.857 NVM 
Sets: Not Supported 00:22:01.857 Read Recovery Levels: Not Supported 00:22:01.857 Endurance Groups: Not Supported 00:22:01.857 Predictable Latency Mode: Not Supported 00:22:01.857 Traffic Based Keep ALive: Not Supported 00:22:01.857 Namespace Granularity: Not Supported 00:22:01.857 SQ Associations: Not Supported 00:22:01.857 UUID List: Not Supported 00:22:01.857 Multi-Domain Subsystem: Not Supported 00:22:01.857 Fixed Capacity Management: Not Supported 00:22:01.857 Variable Capacity Management: Not Supported 00:22:01.857 Delete Endurance Group: Not Supported 00:22:01.857 Delete NVM Set: Not Supported 00:22:01.857 Extended LBA Formats Supported: Not Supported 00:22:01.857 Flexible Data Placement Supported: Not Supported 00:22:01.857 00:22:01.857 Controller Memory Buffer Support 00:22:01.857 ================================ 00:22:01.857 Supported: No 00:22:01.857 00:22:01.857 Persistent Memory Region Support 00:22:01.857 ================================ 00:22:01.857 Supported: No 00:22:01.857 00:22:01.857 Admin Command Set Attributes 00:22:01.857 ============================ 00:22:01.857 Security Send/Receive: Not Supported 00:22:01.857 Format NVM: Not Supported 00:22:01.857 Firmware Activate/Download: Not Supported 00:22:01.857 Namespace Management: Not Supported 00:22:01.857 Device Self-Test: Not Supported 00:22:01.857 Directives: Not Supported 00:22:01.857 NVMe-MI: Not Supported 00:22:01.857 Virtualization Management: Not Supported 00:22:01.857 Doorbell Buffer Config: Not Supported 00:22:01.857 Get LBA Status Capability: Not Supported 00:22:01.857 Command & Feature Lockdown Capability: Not Supported 00:22:01.857 Abort Command Limit: 1 00:22:01.857 Async Event Request Limit: 1 00:22:01.857 Number of Firmware Slots: N/A 00:22:01.857 Firmware Slot 1 Read-Only: N/A 00:22:01.857 Firmware Activation Without Reset: N/A 00:22:01.857 Multiple Update Detection Support: N/A 00:22:01.857 Firmware Update Granularity: No Information Provided 00:22:01.857 Per-Namespace SMART 
Log: No 00:22:01.857 Asymmetric Namespace Access Log Page: Not Supported 00:22:01.857 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:01.857 Command Effects Log Page: Not Supported 00:22:01.857 Get Log Page Extended Data: Supported 00:22:01.857 Telemetry Log Pages: Not Supported 00:22:01.857 Persistent Event Log Pages: Not Supported 00:22:01.857 Supported Log Pages Log Page: May Support 00:22:01.857 Commands Supported & Effects Log Page: Not Supported 00:22:01.857 Feature Identifiers & Effects Log Page:May Support 00:22:01.857 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.857 Data Area 4 for Telemetry Log: Not Supported 00:22:01.857 Error Log Page Entries Supported: 1 00:22:01.857 Keep Alive: Not Supported 00:22:01.857 00:22:01.857 NVM Command Set Attributes 00:22:01.857 ========================== 00:22:01.857 Submission Queue Entry Size 00:22:01.857 Max: 1 00:22:01.857 Min: 1 00:22:01.857 Completion Queue Entry Size 00:22:01.857 Max: 1 00:22:01.857 Min: 1 00:22:01.857 Number of Namespaces: 0 00:22:01.857 Compare Command: Not Supported 00:22:01.857 Write Uncorrectable Command: Not Supported 00:22:01.857 Dataset Management Command: Not Supported 00:22:01.857 Write Zeroes Command: Not Supported 00:22:01.857 Set Features Save Field: Not Supported 00:22:01.857 Reservations: Not Supported 00:22:01.857 Timestamp: Not Supported 00:22:01.857 Copy: Not Supported 00:22:01.857 Volatile Write Cache: Not Present 00:22:01.857 Atomic Write Unit (Normal): 1 00:22:01.857 Atomic Write Unit (PFail): 1 00:22:01.857 Atomic Compare & Write Unit: 1 00:22:01.857 Fused Compare & Write: Not Supported 00:22:01.857 Scatter-Gather List 00:22:01.857 SGL Command Set: Supported 00:22:01.857 SGL Keyed: Not Supported 00:22:01.857 SGL Bit Bucket Descriptor: Not Supported 00:22:01.857 SGL Metadata Pointer: Not Supported 00:22:01.857 Oversized SGL: Not Supported 00:22:01.857 SGL Metadata Address: Not Supported 00:22:01.857 SGL Offset: Supported 00:22:01.857 Transport SGL Data 
Block: Not Supported 00:22:01.857 Replay Protected Memory Block: Not Supported 00:22:01.857 00:22:01.857 Firmware Slot Information 00:22:01.857 ========================= 00:22:01.857 Active slot: 0 00:22:01.857 00:22:01.857 00:22:01.857 Error Log 00:22:01.857 ========= 00:22:01.857 00:22:01.857 Active Namespaces 00:22:01.857 ================= 00:22:01.857 Discovery Log Page 00:22:01.857 ================== 00:22:01.857 Generation Counter: 2 00:22:01.857 Number of Records: 2 00:22:01.857 Record Format: 0 00:22:01.857 00:22:01.857 Discovery Log Entry 0 00:22:01.857 ---------------------- 00:22:01.857 Transport Type: 3 (TCP) 00:22:01.857 Address Family: 1 (IPv4) 00:22:01.857 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:01.858 Entry Flags: 00:22:01.858 Duplicate Returned Information: 0 00:22:01.858 Explicit Persistent Connection Support for Discovery: 0 00:22:01.858 Transport Requirements: 00:22:01.858 Secure Channel: Not Specified 00:22:01.858 Port ID: 1 (0x0001) 00:22:01.858 Controller ID: 65535 (0xffff) 00:22:01.858 Admin Max SQ Size: 32 00:22:01.858 Transport Service Identifier: 4420 00:22:01.858 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:01.858 Transport Address: 10.0.0.1 00:22:01.858 Discovery Log Entry 1 00:22:01.858 ---------------------- 00:22:01.858 Transport Type: 3 (TCP) 00:22:01.858 Address Family: 1 (IPv4) 00:22:01.858 Subsystem Type: 2 (NVM Subsystem) 00:22:01.858 Entry Flags: 00:22:01.858 Duplicate Returned Information: 0 00:22:01.858 Explicit Persistent Connection Support for Discovery: 0 00:22:01.858 Transport Requirements: 00:22:01.858 Secure Channel: Not Specified 00:22:01.858 Port ID: 1 (0x0001) 00:22:01.858 Controller ID: 65535 (0xffff) 00:22:01.858 Admin Max SQ Size: 32 00:22:01.858 Transport Service Identifier: 4420 00:22:01.858 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:01.858 Transport Address: 10.0.0.1 00:22:01.858 22:13:44 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:01.858 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.117 get_feature(0x01) failed 00:22:02.117 get_feature(0x02) failed 00:22:02.117 get_feature(0x04) failed 00:22:02.117 ===================================================== 00:22:02.117 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:02.117 ===================================================== 00:22:02.117 Controller Capabilities/Features 00:22:02.117 ================================ 00:22:02.117 Vendor ID: 0000 00:22:02.117 Subsystem Vendor ID: 0000 00:22:02.117 Serial Number: 984f76f61790306e52bf 00:22:02.117 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:02.117 Firmware Version: 6.7.0-68 00:22:02.117 Recommended Arb Burst: 6 00:22:02.117 IEEE OUI Identifier: 00 00 00 00:22:02.117 Multi-path I/O 00:22:02.117 May have multiple subsystem ports: Yes 00:22:02.117 May have multiple controllers: Yes 00:22:02.117 Associated with SR-IOV VF: No 00:22:02.117 Max Data Transfer Size: Unlimited 00:22:02.117 Max Number of Namespaces: 1024 00:22:02.117 Max Number of I/O Queues: 128 00:22:02.117 NVMe Specification Version (VS): 1.3 00:22:02.117 NVMe Specification Version (Identify): 1.3 00:22:02.117 Maximum Queue Entries: 1024 00:22:02.117 Contiguous Queues Required: No 00:22:02.117 Arbitration Mechanisms Supported 00:22:02.117 Weighted Round Robin: Not Supported 00:22:02.117 Vendor Specific: Not Supported 00:22:02.117 Reset Timeout: 7500 ms 00:22:02.117 Doorbell Stride: 4 bytes 00:22:02.117 NVM Subsystem Reset: Not Supported 00:22:02.117 Command Sets Supported 00:22:02.117 NVM Command Set: Supported 00:22:02.117 Boot Partition: Not Supported 00:22:02.117 Memory Page Size Minimum: 4096 bytes 00:22:02.117 Memory Page Size Maximum: 4096 bytes 00:22:02.117 Persistent Memory Region: Not Supported 
00:22:02.117 Optional Asynchronous Events Supported 00:22:02.117 Namespace Attribute Notices: Supported 00:22:02.117 Firmware Activation Notices: Not Supported 00:22:02.117 ANA Change Notices: Supported 00:22:02.117 PLE Aggregate Log Change Notices: Not Supported 00:22:02.117 LBA Status Info Alert Notices: Not Supported 00:22:02.117 EGE Aggregate Log Change Notices: Not Supported 00:22:02.117 Normal NVM Subsystem Shutdown event: Not Supported 00:22:02.117 Zone Descriptor Change Notices: Not Supported 00:22:02.117 Discovery Log Change Notices: Not Supported 00:22:02.117 Controller Attributes 00:22:02.117 128-bit Host Identifier: Supported 00:22:02.117 Non-Operational Permissive Mode: Not Supported 00:22:02.117 NVM Sets: Not Supported 00:22:02.117 Read Recovery Levels: Not Supported 00:22:02.117 Endurance Groups: Not Supported 00:22:02.117 Predictable Latency Mode: Not Supported 00:22:02.117 Traffic Based Keep ALive: Supported 00:22:02.117 Namespace Granularity: Not Supported 00:22:02.117 SQ Associations: Not Supported 00:22:02.117 UUID List: Not Supported 00:22:02.117 Multi-Domain Subsystem: Not Supported 00:22:02.117 Fixed Capacity Management: Not Supported 00:22:02.117 Variable Capacity Management: Not Supported 00:22:02.117 Delete Endurance Group: Not Supported 00:22:02.117 Delete NVM Set: Not Supported 00:22:02.117 Extended LBA Formats Supported: Not Supported 00:22:02.117 Flexible Data Placement Supported: Not Supported 00:22:02.117 00:22:02.117 Controller Memory Buffer Support 00:22:02.117 ================================ 00:22:02.117 Supported: No 00:22:02.117 00:22:02.117 Persistent Memory Region Support 00:22:02.117 ================================ 00:22:02.117 Supported: No 00:22:02.117 00:22:02.117 Admin Command Set Attributes 00:22:02.117 ============================ 00:22:02.117 Security Send/Receive: Not Supported 00:22:02.117 Format NVM: Not Supported 00:22:02.117 Firmware Activate/Download: Not Supported 00:22:02.117 Namespace Management: Not 
Supported 00:22:02.117 Device Self-Test: Not Supported 00:22:02.117 Directives: Not Supported 00:22:02.117 NVMe-MI: Not Supported 00:22:02.117 Virtualization Management: Not Supported 00:22:02.117 Doorbell Buffer Config: Not Supported 00:22:02.117 Get LBA Status Capability: Not Supported 00:22:02.117 Command & Feature Lockdown Capability: Not Supported 00:22:02.117 Abort Command Limit: 4 00:22:02.117 Async Event Request Limit: 4 00:22:02.117 Number of Firmware Slots: N/A 00:22:02.117 Firmware Slot 1 Read-Only: N/A 00:22:02.117 Firmware Activation Without Reset: N/A 00:22:02.117 Multiple Update Detection Support: N/A 00:22:02.117 Firmware Update Granularity: No Information Provided 00:22:02.117 Per-Namespace SMART Log: Yes 00:22:02.117 Asymmetric Namespace Access Log Page: Supported 00:22:02.117 ANA Transition Time : 10 sec 00:22:02.117 00:22:02.117 Asymmetric Namespace Access Capabilities 00:22:02.117 ANA Optimized State : Supported 00:22:02.117 ANA Non-Optimized State : Supported 00:22:02.117 ANA Inaccessible State : Supported 00:22:02.117 ANA Persistent Loss State : Supported 00:22:02.117 ANA Change State : Supported 00:22:02.117 ANAGRPID is not changed : No 00:22:02.117 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:02.117 00:22:02.117 ANA Group Identifier Maximum : 128 00:22:02.117 Number of ANA Group Identifiers : 128 00:22:02.117 Max Number of Allowed Namespaces : 1024 00:22:02.117 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:02.117 Command Effects Log Page: Supported 00:22:02.117 Get Log Page Extended Data: Supported 00:22:02.117 Telemetry Log Pages: Not Supported 00:22:02.117 Persistent Event Log Pages: Not Supported 00:22:02.117 Supported Log Pages Log Page: May Support 00:22:02.117 Commands Supported & Effects Log Page: Not Supported 00:22:02.117 Feature Identifiers & Effects Log Page:May Support 00:22:02.117 NVMe-MI Commands & Effects Log Page: May Support 00:22:02.117 Data Area 4 for Telemetry Log: Not Supported 00:22:02.117 Error Log Page 
Entries Supported: 128 00:22:02.117 Keep Alive: Supported 00:22:02.117 Keep Alive Granularity: 1000 ms 00:22:02.117 00:22:02.117 NVM Command Set Attributes 00:22:02.117 ========================== 00:22:02.117 Submission Queue Entry Size 00:22:02.117 Max: 64 00:22:02.117 Min: 64 00:22:02.117 Completion Queue Entry Size 00:22:02.117 Max: 16 00:22:02.117 Min: 16 00:22:02.117 Number of Namespaces: 1024 00:22:02.117 Compare Command: Not Supported 00:22:02.117 Write Uncorrectable Command: Not Supported 00:22:02.117 Dataset Management Command: Supported 00:22:02.117 Write Zeroes Command: Supported 00:22:02.117 Set Features Save Field: Not Supported 00:22:02.117 Reservations: Not Supported 00:22:02.117 Timestamp: Not Supported 00:22:02.117 Copy: Not Supported 00:22:02.117 Volatile Write Cache: Present 00:22:02.117 Atomic Write Unit (Normal): 1 00:22:02.117 Atomic Write Unit (PFail): 1 00:22:02.117 Atomic Compare & Write Unit: 1 00:22:02.117 Fused Compare & Write: Not Supported 00:22:02.117 Scatter-Gather List 00:22:02.117 SGL Command Set: Supported 00:22:02.117 SGL Keyed: Not Supported 00:22:02.117 SGL Bit Bucket Descriptor: Not Supported 00:22:02.117 SGL Metadata Pointer: Not Supported 00:22:02.117 Oversized SGL: Not Supported 00:22:02.117 SGL Metadata Address: Not Supported 00:22:02.117 SGL Offset: Supported 00:22:02.117 Transport SGL Data Block: Not Supported 00:22:02.117 Replay Protected Memory Block: Not Supported 00:22:02.117 00:22:02.117 Firmware Slot Information 00:22:02.117 ========================= 00:22:02.117 Active slot: 0 00:22:02.117 00:22:02.117 Asymmetric Namespace Access 00:22:02.117 =========================== 00:22:02.117 Change Count : 0 00:22:02.118 Number of ANA Group Descriptors : 1 00:22:02.118 ANA Group Descriptor : 0 00:22:02.118 ANA Group ID : 1 00:22:02.118 Number of NSID Values : 1 00:22:02.118 Change Count : 0 00:22:02.118 ANA State : 1 00:22:02.118 Namespace Identifier : 1 00:22:02.118 00:22:02.118 Commands Supported and Effects 00:22:02.118 
============================== 00:22:02.118 Admin Commands 00:22:02.118 -------------- 00:22:02.118 Get Log Page (02h): Supported 00:22:02.118 Identify (06h): Supported 00:22:02.118 Abort (08h): Supported 00:22:02.118 Set Features (09h): Supported 00:22:02.118 Get Features (0Ah): Supported 00:22:02.118 Asynchronous Event Request (0Ch): Supported 00:22:02.118 Keep Alive (18h): Supported 00:22:02.118 I/O Commands 00:22:02.118 ------------ 00:22:02.118 Flush (00h): Supported 00:22:02.118 Write (01h): Supported LBA-Change 00:22:02.118 Read (02h): Supported 00:22:02.118 Write Zeroes (08h): Supported LBA-Change 00:22:02.118 Dataset Management (09h): Supported 00:22:02.118 00:22:02.118 Error Log 00:22:02.118 ========= 00:22:02.118 Entry: 0 00:22:02.118 Error Count: 0x3 00:22:02.118 Submission Queue Id: 0x0 00:22:02.118 Command Id: 0x5 00:22:02.118 Phase Bit: 0 00:22:02.118 Status Code: 0x2 00:22:02.118 Status Code Type: 0x0 00:22:02.118 Do Not Retry: 1 00:22:02.118 Error Location: 0x28 00:22:02.118 LBA: 0x0 00:22:02.118 Namespace: 0x0 00:22:02.118 Vendor Log Page: 0x0 00:22:02.118 ----------- 00:22:02.118 Entry: 1 00:22:02.118 Error Count: 0x2 00:22:02.118 Submission Queue Id: 0x0 00:22:02.118 Command Id: 0x5 00:22:02.118 Phase Bit: 0 00:22:02.118 Status Code: 0x2 00:22:02.118 Status Code Type: 0x0 00:22:02.118 Do Not Retry: 1 00:22:02.118 Error Location: 0x28 00:22:02.118 LBA: 0x0 00:22:02.118 Namespace: 0x0 00:22:02.118 Vendor Log Page: 0x0 00:22:02.118 ----------- 00:22:02.118 Entry: 2 00:22:02.118 Error Count: 0x1 00:22:02.118 Submission Queue Id: 0x0 00:22:02.118 Command Id: 0x4 00:22:02.118 Phase Bit: 0 00:22:02.118 Status Code: 0x2 00:22:02.118 Status Code Type: 0x0 00:22:02.118 Do Not Retry: 1 00:22:02.118 Error Location: 0x28 00:22:02.118 LBA: 0x0 00:22:02.118 Namespace: 0x0 00:22:02.118 Vendor Log Page: 0x0 00:22:02.118 00:22:02.118 Number of Queues 00:22:02.118 ================ 00:22:02.118 Number of I/O Submission Queues: 128 00:22:02.118 Number of I/O 
Completion Queues: 128 00:22:02.118 00:22:02.118 ZNS Specific Controller Data 00:22:02.118 ============================ 00:22:02.118 Zone Append Size Limit: 0 00:22:02.118 00:22:02.118 00:22:02.118 Active Namespaces 00:22:02.118 ================= 00:22:02.118 get_feature(0x05) failed 00:22:02.118 Namespace ID:1 00:22:02.118 Command Set Identifier: NVM (00h) 00:22:02.118 Deallocate: Supported 00:22:02.118 Deallocated/Unwritten Error: Not Supported 00:22:02.118 Deallocated Read Value: Unknown 00:22:02.118 Deallocate in Write Zeroes: Not Supported 00:22:02.118 Deallocated Guard Field: 0xFFFF 00:22:02.118 Flush: Supported 00:22:02.118 Reservation: Not Supported 00:22:02.118 Namespace Sharing Capabilities: Multiple Controllers 00:22:02.118 Size (in LBAs): 1953525168 (931GiB) 00:22:02.118 Capacity (in LBAs): 1953525168 (931GiB) 00:22:02.118 Utilization (in LBAs): 1953525168 (931GiB) 00:22:02.118 UUID: 8932e2e6-c14b-4210-9191-100ae8dde0ee 00:22:02.118 Thin Provisioning: Not Supported 00:22:02.118 Per-NS Atomic Units: Yes 00:22:02.118 Atomic Boundary Size (Normal): 0 00:22:02.118 Atomic Boundary Size (PFail): 0 00:22:02.118 Atomic Boundary Offset: 0 00:22:02.118 NGUID/EUI64 Never Reused: No 00:22:02.118 ANA group ID: 1 00:22:02.118 Namespace Write Protected: No 00:22:02.118 Number of LBA Formats: 1 00:22:02.118 Current LBA Format: LBA Format #00 00:22:02.118 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:02.118 00:22:02.118 22:13:44 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:02.118 22:13:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:02.118 22:13:44 -- nvmf/common.sh@117 -- # sync 00:22:02.118 22:13:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.118 22:13:44 -- nvmf/common.sh@120 -- # set +e 00:22:02.118 22:13:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.118 22:13:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.118 rmmod nvme_tcp 00:22:02.118 rmmod nvme_fabrics 00:22:02.118 22:13:44 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:22:02.118 22:13:44 -- nvmf/common.sh@124 -- # set -e 00:22:02.118 22:13:44 -- nvmf/common.sh@125 -- # return 0 00:22:02.118 22:13:44 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:02.118 22:13:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:02.118 22:13:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:02.118 22:13:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:02.118 22:13:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.118 22:13:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.118 22:13:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.118 22:13:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.118 22:13:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.020 22:13:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.020 22:13:46 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:04.020 22:13:46 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:04.020 22:13:46 -- nvmf/common.sh@675 -- # echo 0 00:22:04.020 22:13:46 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:04.020 22:13:46 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:04.020 22:13:46 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:04.020 22:13:46 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:04.020 22:13:46 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:04.020 22:13:46 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:04.278 22:13:46 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:05.653 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.6 (8086 
0e26): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:05.653 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:05.653 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:06.587 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:22:06.587 00:22:06.587 real 0m9.732s 00:22:06.587 user 0m2.098s 00:22:06.587 sys 0m3.823s 00:22:06.587 22:13:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:06.587 22:13:48 -- common/autotest_common.sh@10 -- # set +x 00:22:06.587 ************************************ 00:22:06.587 END TEST nvmf_identify_kernel_target 00:22:06.587 ************************************ 00:22:06.846 22:13:48 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:06.846 22:13:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:06.846 22:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.846 22:13:48 -- common/autotest_common.sh@10 -- # set +x 00:22:06.846 ************************************ 00:22:06.846 START TEST nvmf_auth 00:22:06.846 ************************************ 00:22:06.846 22:13:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:06.846 * Looking for test storage... 
00:22:06.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.846 22:13:49 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.846 22:13:49 -- nvmf/common.sh@7 -- # uname -s 00:22:06.846 22:13:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.846 22:13:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.846 22:13:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.846 22:13:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.846 22:13:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.846 22:13:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.846 22:13:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.846 22:13:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.846 22:13:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.846 22:13:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.846 22:13:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.846 22:13:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.846 22:13:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.846 22:13:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.846 22:13:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.846 22:13:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.846 22:13:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.846 22:13:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.846 22:13:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.846 22:13:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.846 22:13:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.847 22:13:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.847 22:13:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.847 22:13:49 -- paths/export.sh@5 -- # export PATH 00:22:06.847 22:13:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.847 22:13:49 -- nvmf/common.sh@47 -- # : 0 00:22:06.847 22:13:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.847 22:13:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.847 22:13:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.847 22:13:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.847 22:13:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.847 22:13:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.847 22:13:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.847 22:13:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.847 22:13:49 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:06.847 22:13:49 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:06.847 22:13:49 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:06.847 22:13:49 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:06.847 22:13:49 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:06.847 22:13:49 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:06.847 22:13:49 -- host/auth.sh@21 -- # keys=() 00:22:06.847 22:13:49 -- host/auth.sh@77 -- # nvmftestinit 00:22:06.847 22:13:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:06.847 22:13:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:22:06.847 22:13:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:06.847 22:13:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:06.847 22:13:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:06.847 22:13:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.847 22:13:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.847 22:13:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.847 22:13:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:06.847 22:13:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:06.847 22:13:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.847 22:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.376 22:13:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:09.376 22:13:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.376 22:13:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.376 22:13:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:09.376 22:13:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.376 22:13:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.376 22:13:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.376 22:13:51 -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.377 22:13:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.377 22:13:51 -- nvmf/common.sh@296 -- # e810=() 00:22:09.377 22:13:51 -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.377 22:13:51 -- nvmf/common.sh@297 -- # x722=() 00:22:09.377 22:13:51 -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.377 22:13:51 -- nvmf/common.sh@298 -- # mlx=() 00:22:09.377 22:13:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.377 22:13:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:22:09.377 22:13:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.377 22:13:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.377 22:13:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:09.377 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:09.377 22:13:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.377 22:13:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:09.377 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:09.377 22:13:51 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.377 22:13:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.377 22:13:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.377 22:13:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:09.377 Found net devices under 0000:84:00.0: cvl_0_0 00:22:09.377 22:13:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.377 22:13:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.377 22:13:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.377 22:13:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:09.377 Found net devices under 0000:84:00.1: cvl_0_1 00:22:09.377 22:13:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:09.377 22:13:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
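Annotation: the `e810=()`/`x722=()`/`mlx=()` array population traced above is `nvmf/common.sh` bucketing supported NIC PCI IDs into driver families before matching them against the host (here both 0x8086:0x159b ports classify as e810, bound to the `ice` driver). A minimal sketch of that classification, with the vendor:device pairs taken from the log; the helper name `classify_nic` is illustrative, not SPDK's:

```python
# Vendor IDs as declared in the log ("local intel=0x8086 mellanox=0x15b3").
INTEL, MELLANOX = 0x8086, 0x15B3

# Family buckets mirroring the e810+=/x722+=/mlx+= lines in the trace.
NIC_FAMILIES = {
    "e810": {(INTEL, 0x1592), (INTEL, 0x159B)},
    "x722": {(INTEL, 0x37D2)},
    "mlx": {(MELLANOX, d) for d in
            (0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x1017, 0x1019, 0x1015, 0x1013)},
}

def classify_nic(vendor: int, device: int):
    """Return the family name for a PCI vendor:device pair, or None."""
    for family, ids in NIC_FAMILIES.items():
        if (vendor, device) in ids:
            return family
    return None
```

For the run above, `classify_nic(0x8086, 0x159B)` selects the e810 bucket, which is why the script then checks `[[ e810 == e810 ]]` and keeps only those devices in `pci_devs`.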
00:22:09.377 22:13:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.377 22:13:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.377 22:13:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:09.377 22:13:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.377 22:13:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.377 22:13:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:09.377 22:13:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.377 22:13:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.377 22:13:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:09.377 22:13:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:09.377 22:13:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.377 22:13:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.377 22:13:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.377 22:13:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.377 22:13:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:09.377 22:13:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.377 22:13:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.377 22:13:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.377 22:13:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:09.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:09.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:22:09.377 00:22:09.377 --- 10.0.0.2 ping statistics --- 00:22:09.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.377 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:09.377 22:13:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:22:09.377 00:22:09.377 --- 10.0.0.1 ping statistics --- 00:22:09.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.377 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:22:09.377 22:13:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.377 22:13:51 -- nvmf/common.sh@411 -- # return 0 00:22:09.377 22:13:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:09.377 22:13:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.377 22:13:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:09.377 22:13:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.377 22:13:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:09.377 22:13:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:09.377 22:13:51 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:22:09.377 22:13:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:09.377 22:13:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:09.377 22:13:51 -- common/autotest_common.sh@10 -- # set +x 00:22:09.377 22:13:51 -- nvmf/common.sh@470 -- # nvmfpid=4018354 00:22:09.377 22:13:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:09.377 22:13:51 -- nvmf/common.sh@471 -- # waitforlisten 4018354 00:22:09.377 22:13:51 -- 
common/autotest_common.sh@817 -- # '[' -z 4018354 ']' 00:22:09.377 22:13:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.377 22:13:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.377 22:13:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.377 22:13:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.377 22:13:51 -- common/autotest_common.sh@10 -- # set +x 00:22:09.944 22:13:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.944 22:13:51 -- common/autotest_common.sh@850 -- # return 0 00:22:09.944 22:13:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:09.944 22:13:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.944 22:13:51 -- common/autotest_common.sh@10 -- # set +x 00:22:09.944 22:13:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.944 22:13:51 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:09.944 22:13:51 -- host/auth.sh@81 -- # gen_key null 32 00:22:09.944 22:13:51 -- host/auth.sh@53 -- # local digest len file key 00:22:09.944 22:13:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.944 22:13:51 -- host/auth.sh@54 -- # local -A digests 00:22:09.944 22:13:51 -- host/auth.sh@56 -- # digest=null 00:22:09.944 22:13:51 -- host/auth.sh@56 -- # len=32 00:22:09.944 22:13:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.944 22:13:51 -- host/auth.sh@57 -- # key=569b82be7adbaf8ced20a2d79c5df6e0 00:22:09.944 22:13:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:09.944 22:13:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.6U8 00:22:09.944 22:13:51 -- host/auth.sh@59 -- # format_dhchap_key 569b82be7adbaf8ced20a2d79c5df6e0 
0 00:22:09.944 22:13:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 569b82be7adbaf8ced20a2d79c5df6e0 0 00:22:09.944 22:13:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.944 22:13:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.944 22:13:51 -- nvmf/common.sh@693 -- # key=569b82be7adbaf8ced20a2d79c5df6e0 00:22:09.944 22:13:51 -- nvmf/common.sh@693 -- # digest=0 00:22:09.944 22:13:51 -- nvmf/common.sh@694 -- # python - 00:22:09.944 22:13:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.6U8 00:22:09.944 22:13:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.6U8 00:22:09.944 22:13:51 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.6U8 00:22:09.944 22:13:51 -- host/auth.sh@82 -- # gen_key null 48 00:22:09.944 22:13:51 -- host/auth.sh@53 -- # local digest len file key 00:22:09.944 22:13:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.944 22:13:51 -- host/auth.sh@54 -- # local -A digests 00:22:09.944 22:13:51 -- host/auth.sh@56 -- # digest=null 00:22:09.944 22:13:51 -- host/auth.sh@56 -- # len=48 00:22:09.944 22:13:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:09.944 22:13:52 -- host/auth.sh@57 -- # key=21b67d5f3edbb432512ceb0ee6f79af8190fb601bcbd5474 00:22:09.944 22:13:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:09.944 22:13:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.5s8 00:22:09.944 22:13:52 -- host/auth.sh@59 -- # format_dhchap_key 21b67d5f3edbb432512ceb0ee6f79af8190fb601bcbd5474 0 00:22:09.945 22:13:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 21b67d5f3edbb432512ceb0ee6f79af8190fb601bcbd5474 0 00:22:09.945 22:13:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # key=21b67d5f3edbb432512ceb0ee6f79af8190fb601bcbd5474 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # digest=0 00:22:09.945 22:13:52 -- nvmf/common.sh@694 -- # 
python - 00:22:09.945 22:13:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.5s8 00:22:09.945 22:13:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.5s8 00:22:09.945 22:13:52 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.5s8 00:22:09.945 22:13:52 -- host/auth.sh@83 -- # gen_key sha256 32 00:22:09.945 22:13:52 -- host/auth.sh@53 -- # local digest len file key 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # local -A digests 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # digest=sha256 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # len=32 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # key=ac0b3490bcadb6211cbf2cef62b819f2 00:22:09.945 22:13:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:22:09.945 22:13:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Bmq 00:22:09.945 22:13:52 -- host/auth.sh@59 -- # format_dhchap_key ac0b3490bcadb6211cbf2cef62b819f2 1 00:22:09.945 22:13:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 ac0b3490bcadb6211cbf2cef62b819f2 1 00:22:09.945 22:13:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # key=ac0b3490bcadb6211cbf2cef62b819f2 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # digest=1 00:22:09.945 22:13:52 -- nvmf/common.sh@694 -- # python - 00:22:09.945 22:13:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Bmq 00:22:09.945 22:13:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Bmq 00:22:09.945 22:13:52 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Bmq 00:22:09.945 22:13:52 -- host/auth.sh@84 -- # gen_key sha384 48 00:22:09.945 22:13:52 -- host/auth.sh@53 -- # local digest len file key 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # local -A digests 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # digest=sha384 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # len=48 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # key=e690d664cb15161f75b201205a9e555db5162e4661949b9e 00:22:09.945 22:13:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:22:09.945 22:13:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.fNi 00:22:09.945 22:13:52 -- host/auth.sh@59 -- # format_dhchap_key e690d664cb15161f75b201205a9e555db5162e4661949b9e 2 00:22:09.945 22:13:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 e690d664cb15161f75b201205a9e555db5162e4661949b9e 2 00:22:09.945 22:13:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # key=e690d664cb15161f75b201205a9e555db5162e4661949b9e 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # digest=2 00:22:09.945 22:13:52 -- nvmf/common.sh@694 -- # python - 00:22:09.945 22:13:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.fNi 00:22:09.945 22:13:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.fNi 00:22:09.945 22:13:52 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.fNi 00:22:09.945 22:13:52 -- host/auth.sh@85 -- # gen_key sha512 64 00:22:09.945 22:13:52 -- host/auth.sh@53 -- # local digest len file key 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.945 22:13:52 -- host/auth.sh@54 -- # local -A digests 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # digest=sha512 00:22:09.945 22:13:52 -- host/auth.sh@56 -- # len=64 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:09.945 22:13:52 -- host/auth.sh@57 -- # key=5024aab80393e5318dde5254c9d494b05cea738a51a67d6aa6dae23957c827c3 00:22:09.945 22:13:52 -- 
host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:22:09.945 22:13:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.yMw 00:22:09.945 22:13:52 -- host/auth.sh@59 -- # format_dhchap_key 5024aab80393e5318dde5254c9d494b05cea738a51a67d6aa6dae23957c827c3 3 00:22:09.945 22:13:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 5024aab80393e5318dde5254c9d494b05cea738a51a67d6aa6dae23957c827c3 3 00:22:09.945 22:13:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # key=5024aab80393e5318dde5254c9d494b05cea738a51a67d6aa6dae23957c827c3 00:22:09.945 22:13:52 -- nvmf/common.sh@693 -- # digest=3 00:22:09.945 22:13:52 -- nvmf/common.sh@694 -- # python - 00:22:10.203 22:13:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.yMw 00:22:10.204 22:13:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.yMw 00:22:10.204 22:13:52 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.yMw 00:22:10.204 22:13:52 -- host/auth.sh@87 -- # waitforlisten 4018354 00:22:10.204 22:13:52 -- common/autotest_common.sh@817 -- # '[' -z 4018354 ']' 00:22:10.204 22:13:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.204 22:13:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:10.204 22:13:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
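Annotation: each `gen_key` call above draws random bytes with `xxd`, then pipes them through `format_dhchap_key`, which shells out to `python -` to wrap the raw secret in the DHHC-1 ASCII form. A sketch of that wrapping, assuming the DH-HMAC-CHAP secret representation (two-hex-digit hash identifier, then base64 of the secret concatenated with its little-endian CRC-32); the digest indices 0–3 match the null/sha256/sha384/sha512 values passed in the trace:

```python
import base64
import binascii

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a hex secret as DHHC-1:<dd>:<base64(secret || crc32_le)>: (sketch)."""
    secret = bytes.fromhex(hex_key)
    # CRC-32 of the secret, appended little-endian before base64-encoding.
    crc = binascii.crc32(secret).to_bytes(4, "little")
    encoded = base64.b64encode(secret + crc).decode()
    return "DHHC-1:%02x:%s:" % (digest, encoded)
```

Fed the first key from the log (`569b82be7adbaf8ced20a2d79c5df6e0`, digest 0), this produces a `DHHC-1:00:...:` string of the kind written to `/tmp/spdk.key-null.6U8` and later registered via `keyring_file_add_key`.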
00:22:10.204 22:13:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:10.204 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:10.462 22:13:52 -- common/autotest_common.sh@850 -- # return 0 00:22:10.462 22:13:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:10.462 22:13:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6U8 00:22:10.462 22:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.462 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.462 22:13:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:10.462 22:13:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5s8 00:22:10.462 22:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.462 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.462 22:13:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:10.462 22:13:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Bmq 00:22:10.462 22:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.462 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.462 22:13:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:10.462 22:13:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fNi 00:22:10.462 22:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.462 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.462 22:13:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:10.462 22:13:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.yMw 00:22:10.462 22:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.462 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:22:10.462 22:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.462 22:13:52 -- host/auth.sh@92 -- # nvmet_auth_init 00:22:10.462 22:13:52 -- host/auth.sh@35 -- # get_main_ns_ip 00:22:10.462 22:13:52 -- nvmf/common.sh@717 -- # local ip 00:22:10.462 22:13:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:10.462 22:13:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:10.462 22:13:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.462 22:13:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.462 22:13:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:10.462 22:13:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.462 22:13:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:10.462 22:13:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:10.463 22:13:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:10.463 22:13:52 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:10.463 22:13:52 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:10.463 22:13:52 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:10.463 22:13:52 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:10.463 22:13:52 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:10.463 22:13:52 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:10.463 22:13:52 -- nvmf/common.sh@628 -- # local block nvme 00:22:10.463 22:13:52 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:10.463 22:13:52 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:10.463 22:13:52 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:10.463 22:13:52 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:11.837 Waiting for block devices as requested 00:22:11.837 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:22:11.837 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:11.837 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:11.837 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:11.837 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:12.095 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:12.095 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:12.095 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:12.095 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:12.353 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:12.353 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:12.353 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:12.353 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:12.611 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:12.611 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:12.611 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:12.611 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:13.177 22:13:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:13.177 22:13:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:13.177 22:13:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:13.177 22:13:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:13.177 22:13:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:13.177 22:13:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:13.177 22:13:55 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:13.177 22:13:55 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:13.177 
22:13:55 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:13.177 No valid GPT data, bailing 00:22:13.177 22:13:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:13.177 22:13:55 -- scripts/common.sh@391 -- # pt= 00:22:13.177 22:13:55 -- scripts/common.sh@392 -- # return 1 00:22:13.177 22:13:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:13.177 22:13:55 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:13.177 22:13:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:13.177 22:13:55 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:13.177 22:13:55 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:13.177 22:13:55 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:13.177 22:13:55 -- nvmf/common.sh@656 -- # echo 1 00:22:13.177 22:13:55 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:13.177 22:13:55 -- nvmf/common.sh@658 -- # echo 1 00:22:13.177 22:13:55 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:13.177 22:13:55 -- nvmf/common.sh@661 -- # echo tcp 00:22:13.177 22:13:55 -- nvmf/common.sh@662 -- # echo 4420 00:22:13.177 22:13:55 -- nvmf/common.sh@663 -- # echo ipv4 00:22:13.177 22:13:55 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:13.177 22:13:55 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:22:13.177 00:22:13.177 Discovery Log Number of Records 2, Generation counter 2 00:22:13.177 =====Discovery Log Entry 0====== 00:22:13.177 trtype: tcp 00:22:13.177 adrfam: ipv4 00:22:13.177 subtype: current discovery subsystem 00:22:13.177 treq: not specified, sq flow control 
disable supported 00:22:13.177 portid: 1 00:22:13.177 trsvcid: 4420 00:22:13.177 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:13.177 traddr: 10.0.0.1 00:22:13.177 eflags: none 00:22:13.177 sectype: none 00:22:13.177 =====Discovery Log Entry 1====== 00:22:13.177 trtype: tcp 00:22:13.177 adrfam: ipv4 00:22:13.177 subtype: nvme subsystem 00:22:13.177 treq: not specified, sq flow control disable supported 00:22:13.177 portid: 1 00:22:13.177 trsvcid: 4420 00:22:13.177 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:13.177 traddr: 10.0.0.1 00:22:13.177 eflags: none 00:22:13.177 sectype: none 00:22:13.177 22:13:55 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:13.177 22:13:55 -- host/auth.sh@37 -- # echo 0 00:22:13.177 22:13:55 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:13.177 22:13:55 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:13.177 22:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:13.177 22:13:55 -- host/auth.sh@44 -- # digest=sha256 00:22:13.177 22:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:13.177 22:13:55 -- host/auth.sh@44 -- # keyid=1 00:22:13.177 22:13:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:13.177 22:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.177 22:13:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:13.177 22:13:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:13.177 22:13:55 -- host/auth.sh@100 -- # IFS=, 00:22:13.177 22:13:55 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:22:13.177 22:13:55 -- host/auth.sh@100 -- # IFS=, 00:22:13.177 22:13:55 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
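As an aside on the `nvme discover` output captured above: it has a regular `key: value` layout per log entry, delimited by `=====Discovery Log Entry N======` headers. A minimal parser sketch (Python; the helper name and the embedded sample are illustrative, the sample text is taken from the discovery records in this log):

```python
import re

# Sample reproduced from the discovery output in the log above.
SAMPLE = """=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.0.0.1
eflags:  none
sectype: none
=====Discovery Log Entry 1======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nqn.2024-02.io.spdk:cnode0
traddr:  10.0.0.1
eflags:  none
sectype: none"""

def parse_discovery_log(text):
    """Split `nvme discover` output into one dict per discovery log entry.

    Splits on the '=====Discovery Log Entry N======' headers, then
    partitions each line on the FIRST colon so values that themselves
    contain ':' (e.g. subnqn 'nqn.2024-02.io.spdk:cnode0') stay intact.
    """
    entries = []
    for chunk in re.split(r"=+Discovery Log Entry \d+=+", text)[1:]:
        entry = {}
        for line in chunk.strip().splitlines():
            key, _, value = line.partition(":")
            if value:
                entry[key.strip()] = value.strip()
        entries.append(entry)
    return entries
```

For the two records above, `parse_discovery_log(SAMPLE)` yields the well-known discovery subsystem (`nqn.2014-08.org.nvmexpress.discovery`) plus the test subsystem `nqn.2024-02.io.spdk:cnode0` on `10.0.0.1:4420`.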
00:22:13.177 22:13:55 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:13.177 22:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:13.177 22:13:55 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:22:13.177 22:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.177 22:13:55 -- host/auth.sh@68 -- # keyid=1 00:22:13.177 22:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.177 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.178 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.178 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.178 22:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.178 22:13:55 -- nvmf/common.sh@717 -- # local ip 00:22:13.178 22:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.178 22:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.178 22:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.178 22:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.178 22:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.178 22:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.178 22:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.178 22:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.178 22:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:13.178 22:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:13.178 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.178 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.178 nvme0n1 00:22:13.178 
22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.178 22:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.178 22:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:13.178 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.178 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.435 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.435 22:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.435 22:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.435 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.435 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.435 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.435 22:13:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:13.435 22:13:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.435 22:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:13.435 22:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:13.435 22:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:13.435 22:13:55 -- host/auth.sh@44 -- # digest=sha256 00:22:13.435 22:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:13.435 22:13:55 -- host/auth.sh@44 -- # keyid=0 00:22:13.435 22:13:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:13.435 22:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.435 22:13:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:13.436 22:13:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:13.436 22:13:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:22:13.436 22:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:13.436 22:13:55 -- host/auth.sh@68 -- # digest=sha256 00:22:13.436 22:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 
00:22:13.436 22:13:55 -- host/auth.sh@68 -- # keyid=0 00:22:13.436 22:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:13.436 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.436 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.436 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.436 22:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.436 22:13:55 -- nvmf/common.sh@717 -- # local ip 00:22:13.436 22:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.436 22:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.436 22:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.436 22:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.436 22:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.436 22:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.436 22:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.436 22:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.436 22:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:13.436 22:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:13.436 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.436 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.436 nvme0n1 00:22:13.436 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.436 22:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.436 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.436 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.436 22:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:13.436 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.436 22:13:55 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.436 22:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.436 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.436 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.730 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.730 22:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:13.730 22:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:13.730 22:13:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:13.730 22:13:55 -- host/auth.sh@44 -- # digest=sha256 00:22:13.730 22:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:13.730 22:13:55 -- host/auth.sh@44 -- # keyid=1 00:22:13.730 22:13:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:13.730 22:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.730 22:13:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:13.730 22:13:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:13.730 22:13:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:22:13.730 22:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:13.730 22:13:55 -- host/auth.sh@68 -- # digest=sha256 00:22:13.730 22:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:13.730 22:13:55 -- host/auth.sh@68 -- # keyid=1 00:22:13.730 22:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:13.730 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.730 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.730 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.730 22:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.730 22:13:55 -- nvmf/common.sh@717 -- # local ip 00:22:13.730 22:13:55 -- 
nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.730 22:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.730 22:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.730 22:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.730 22:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.730 22:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.730 22:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.730 22:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.730 22:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:13.730 22:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:13.730 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.730 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.730 nvme0n1 00:22:13.730 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.730 22:13:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.730 22:13:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:13.731 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.731 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.731 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.731 22:13:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.731 22:13:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.731 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.731 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.731 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.731 22:13:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:13.731 22:13:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:13.731 22:13:55 -- host/auth.sh@42 
-- # local digest dhgroup keyid key 00:22:13.731 22:13:55 -- host/auth.sh@44 -- # digest=sha256 00:22:13.731 22:13:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:13.731 22:13:55 -- host/auth.sh@44 -- # keyid=2 00:22:13.731 22:13:55 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:13.731 22:13:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.731 22:13:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:13.731 22:13:55 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:13.731 22:13:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:22:13.731 22:13:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:13.731 22:13:55 -- host/auth.sh@68 -- # digest=sha256 00:22:13.731 22:13:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:13.731 22:13:55 -- host/auth.sh@68 -- # keyid=2 00:22:13.731 22:13:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:13.731 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.731 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:13.731 22:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.731 22:13:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.731 22:13:55 -- nvmf/common.sh@717 -- # local ip 00:22:13.731 22:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.731 22:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.731 22:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.731 22:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.731 22:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.731 22:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.731 22:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.731 22:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.731 22:13:55 -- nvmf/common.sh@731 
-- # echo 10.0.0.1 00:22:13.731 22:13:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:13.731 22:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.731 22:13:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 nvme0n1 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.038 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.038 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.038 22:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:14.038 22:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.038 22:13:56 -- host/auth.sh@44 -- # digest=sha256 00:22:14.038 22:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.038 22:13:56 -- host/auth.sh@44 -- # keyid=3 00:22:14.038 22:13:56 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:14.038 22:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.038 22:13:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.038 22:13:56 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 
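The `DHHC-1:NN:...:` strings echoed above are DHCHAP secrets in the format shared by nvme-cli, the kernel nvmet target, and SPDK: the base64 payload is, as far as I understand the encoding, the raw key bytes followed by a little-endian CRC32 of those bytes, which lets either side detect a corrupted key before attempting authentication. A minimal sketch under that assumption (function names are hypothetical; `00` in the second field denotes an untransformed key):

```python
import base64
import secrets
import zlib

def gen_dhchap_key(nbytes=32):
    """Generate a DHHC-1 secret: base64(key || crc32(key) little-endian).

    Assumption: the '00' field means the key is used as-is (no hash
    transform), matching the untransformed keys seen in this log.
    """
    key = secrets.token_bytes(nbytes)
    blob = key + zlib.crc32(key).to_bytes(4, "little")
    return "DHHC-1:00:" + base64.b64encode(blob).decode() + ":"

def check_dhchap_key(secret):
    """Verify the trailing CRC32 embedded in a DHHC-1 secret string."""
    if not secret.startswith("DHHC-1:") or not secret.endswith(":"):
        return False
    # base64 alphabet contains no ':', so splitting is unambiguous.
    b64 = secret.split(":")[2]
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    return zlib.crc32(key).to_bytes(4, "little") == crc
```

A freshly generated secret round-trips through `check_dhchap_key`, while a payload whose CRC bytes do not match the key bytes is rejected.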
00:22:14.038 22:13:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:22:14.038 22:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.038 22:13:56 -- host/auth.sh@68 -- # digest=sha256 00:22:14.038 22:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:14.038 22:13:56 -- host/auth.sh@68 -- # keyid=3 00:22:14.038 22:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.038 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.038 22:13:56 -- nvmf/common.sh@717 -- # local ip 00:22:14.038 22:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.038 22:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.038 22:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.038 22:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.038 22:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.038 22:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.038 22:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.038 22:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.038 22:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.038 22:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:14.038 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 nvme0n1 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.038 22:13:56 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.038 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.038 22:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.038 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.038 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.301 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.301 22:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.301 22:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:14.301 22:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.301 22:13:56 -- host/auth.sh@44 -- # digest=sha256 00:22:14.301 22:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.301 22:13:56 -- host/auth.sh@44 -- # keyid=4 00:22:14.301 22:13:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:14.301 22:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.301 22:13:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.301 22:13:56 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:14.301 22:13:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:22:14.301 22:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.301 22:13:56 -- host/auth.sh@68 -- # digest=sha256 00:22:14.301 22:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:14.301 22:13:56 -- host/auth.sh@68 -- # keyid=4 00:22:14.301 22:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.301 22:13:56 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:22:14.301 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.301 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.301 22:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.301 22:13:56 -- nvmf/common.sh@717 -- # local ip 00:22:14.301 22:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.301 22:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.301 22:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.301 22:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.301 22:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.301 22:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.301 22:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.301 22:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.301 22:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.301 22:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:14.302 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.302 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.302 nvme0n1 00:22:14.302 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.302 22:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.302 22:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.302 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.302 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.302 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.302 22:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.302 22:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.302 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.302 22:13:56 -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.302 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.302 22:13:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.302 22:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.302 22:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:14.302 22:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.302 22:13:56 -- host/auth.sh@44 -- # digest=sha256 00:22:14.302 22:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:14.302 22:13:56 -- host/auth.sh@44 -- # keyid=0 00:22:14.302 22:13:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:14.302 22:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.302 22:13:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:14.302 22:13:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:14.302 22:13:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:22:14.302 22:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.302 22:13:56 -- host/auth.sh@68 -- # digest=sha256 00:22:14.302 22:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:14.302 22:13:56 -- host/auth.sh@68 -- # keyid=0 00:22:14.302 22:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:14.302 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.302 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.302 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.302 22:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.302 22:13:56 -- nvmf/common.sh@717 -- # local ip 00:22:14.302 22:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.302 22:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.302 22:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.302 
22:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.302 22:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.302 22:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.302 22:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.302 22:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.302 22:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.302 22:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:14.302 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.302 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.560 nvme0n1 00:22:14.560 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.560 22:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.560 22:13:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.560 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.560 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.560 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.560 22:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.560 22:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.560 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.560 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.560 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.560 22:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.560 22:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:14.560 22:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.560 22:13:56 -- host/auth.sh@44 -- # digest=sha256 00:22:14.560 22:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:14.560 22:13:56 -- host/auth.sh@44 -- # keyid=1 
00:22:14.560 22:13:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:14.560 22:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.560 22:13:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:14.560 22:13:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:14.560 22:13:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:22:14.560 22:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.560 22:13:56 -- host/auth.sh@68 -- # digest=sha256 00:22:14.560 22:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:14.560 22:13:56 -- host/auth.sh@68 -- # keyid=1 00:22:14.560 22:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:14.560 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.560 22:13:56 -- common/autotest_common.sh@10 -- # set +x 00:22:14.560 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.560 22:13:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.560 22:13:56 -- nvmf/common.sh@717 -- # local ip 00:22:14.560 22:13:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.560 22:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.560 22:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.560 22:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.560 22:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.560 22:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.560 22:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.560 22:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.560 22:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.560 22:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:14.560 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:14.560 22:13:56 -- common/autotest_common.sh@10 -- # set +x
00:22:14.818 nvme0n1
00:22:14.818 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:14.818 22:13:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:14.818 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:14.818 22:13:56 -- common/autotest_common.sh@10 -- # set +x
00:22:14.818 22:13:56 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:14.818 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:14.818 22:13:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:14.818 22:13:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:14.818 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:14.818 22:13:56 -- common/autotest_common.sh@10 -- # set +x
00:22:14.818 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:14.818 22:13:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:14.818 22:13:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:22:14.818 22:13:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:14.818 22:13:56 -- host/auth.sh@44 -- # digest=sha256
00:22:14.818 22:13:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:14.818 22:13:56 -- host/auth.sh@44 -- # keyid=2
00:22:14.818 22:13:56 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:14.818 22:13:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:14.818 22:13:56 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:14.818 22:13:56 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:14.818 22:13:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:22:14.818 22:13:56 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:14.818 22:13:56 -- host/auth.sh@68 -- # digest=sha256
00:22:14.818 22:13:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:14.818 22:13:56 -- host/auth.sh@68 -- # keyid=2
00:22:14.818 22:13:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:14.818 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:14.818 22:13:56 -- common/autotest_common.sh@10 -- # set +x
00:22:14.819 22:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:14.819 22:13:56 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:14.819 22:13:56 -- nvmf/common.sh@717 -- # local ip
00:22:14.819 22:13:56 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:14.819 22:13:56 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:14.819 22:13:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:14.819 22:13:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:14.819 22:13:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:14.819 22:13:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:14.819 22:13:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:14.819 22:13:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:14.819 22:13:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:14.819 22:13:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:14.819 22:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:14.819 22:13:56 -- common/autotest_common.sh@10 -- # set +x
00:22:15.076 nvme0n1
00:22:15.076 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.076 22:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:15.076 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.076 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.076 22:13:57 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:15.076 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.076 22:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.076 22:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:15.076 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.076 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.076 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.076 22:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:15.076 22:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:22:15.076 22:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:15.076 22:13:57 -- host/auth.sh@44 -- # digest=sha256
00:22:15.076 22:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:15.076 22:13:57 -- host/auth.sh@44 -- # keyid=3
00:22:15.076 22:13:57 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:15.076 22:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:15.076 22:13:57 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:15.076 22:13:57 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:15.076 22:13:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:22:15.076 22:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:15.076 22:13:57 -- host/auth.sh@68 -- # digest=sha256
00:22:15.076 22:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:15.076 22:13:57 -- host/auth.sh@68 -- # keyid=3
00:22:15.076 22:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:15.076 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.076 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.076 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.076 22:13:57 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:15.076 22:13:57 -- nvmf/common.sh@717 -- # local ip
00:22:15.076 22:13:57 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:15.076 22:13:57 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:15.076 22:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:15.076 22:13:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:15.076 22:13:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:15.076 22:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:15.076 22:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:15.076 22:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:15.076 22:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:15.076 22:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:22:15.076 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.076 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.333 nvme0n1
00:22:15.333 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.333 22:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:15.333 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.333 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.333 22:13:57 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:15.333 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.333 22:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.333 22:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:15.333 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.333 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.333 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.333 22:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:15.333 22:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:22:15.333 22:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:15.333 22:13:57 -- host/auth.sh@44 -- # digest=sha256
00:22:15.333 22:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:15.333 22:13:57 -- host/auth.sh@44 -- # keyid=4
00:22:15.333 22:13:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:15.333 22:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:15.333 22:13:57 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:15.333 22:13:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:15.333 22:13:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:22:15.333 22:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:15.333 22:13:57 -- host/auth.sh@68 -- # digest=sha256
00:22:15.333 22:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:15.333 22:13:57 -- host/auth.sh@68 -- # keyid=4
00:22:15.333 22:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:22:15.333 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.333 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.333 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.333 22:13:57 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:15.333 22:13:57 -- nvmf/common.sh@717 -- # local ip
00:22:15.333 22:13:57 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:15.333 22:13:57 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:15.333 22:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:15.333 22:13:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:15.333 22:13:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:15.333 22:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:15.333 22:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:15.333 22:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:15.333 22:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:15.333 22:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:15.333 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.333 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.590 nvme0n1
00:22:15.590 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.590 22:13:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:15.590 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.590 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.590 22:13:57 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:15.590 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.590 22:13:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.590 22:13:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:15.590 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.590 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.590 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.590 22:13:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:22:15.590 22:13:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:15.590 22:13:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:22:15.590 22:13:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:15.590 22:13:57 -- host/auth.sh@44 -- # digest=sha256
00:22:15.590 22:13:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:15.590 22:13:57 -- host/auth.sh@44 -- # keyid=0
00:22:15.590 22:13:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:15.591 22:13:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:15.591 22:13:57 -- host/auth.sh@48 -- # echo ffdhe4096
00:22:15.591 22:13:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:15.591 22:13:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:22:15.591 22:13:57 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:15.591 22:13:57 -- host/auth.sh@68 -- # digest=sha256
00:22:15.591 22:13:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:22:15.591 22:13:57 -- host/auth.sh@68 -- # keyid=0
00:22:15.591 22:13:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:15.591 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.591 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.591 22:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.591 22:13:57 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:15.591 22:13:57 -- nvmf/common.sh@717 -- # local ip
00:22:15.591 22:13:57 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:15.591 22:13:57 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:15.591 22:13:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:15.591 22:13:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:15.591 22:13:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:15.591 22:13:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:15.591 22:13:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:15.591 22:13:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:15.591 22:13:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:15.591 22:13:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:22:15.591 22:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.591 22:13:57 -- common/autotest_common.sh@10 -- # set +x
00:22:15.847 nvme0n1
00:22:15.847 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:15.847 22:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:15.847 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:15.847 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:15.847 22:13:58 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:15.847 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.103 22:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.103 22:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:16.103 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.103 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.103 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.103 22:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:16.103 22:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:22:16.103 22:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:16.103 22:13:58 -- host/auth.sh@44 -- # digest=sha256
00:22:16.103 22:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:16.103 22:13:58 -- host/auth.sh@44 -- # keyid=1
00:22:16.103 22:13:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:16.103 22:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:16.103 22:13:58 -- host/auth.sh@48 -- # echo ffdhe4096
00:22:16.103 22:13:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:16.103 22:13:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:22:16.103 22:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:16.103 22:13:58 -- host/auth.sh@68 -- # digest=sha256
00:22:16.103 22:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:22:16.103 22:13:58 -- host/auth.sh@68 -- # keyid=1
00:22:16.103 22:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:16.104 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.104 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.104 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.104 22:13:58 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:16.104 22:13:58 -- nvmf/common.sh@717 -- # local ip
00:22:16.104 22:13:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:16.104 22:13:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:16.104 22:13:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:16.104 22:13:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:16.104 22:13:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:16.104 22:13:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:16.104 22:13:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:16.104 22:13:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:16.104 22:13:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:16.104 22:13:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:16.104 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.104 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.360 nvme0n1
00:22:16.360 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.360 22:13:58 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:16.360 22:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:16.361 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.361 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.361 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.361 22:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.361 22:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:16.361 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.361 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.361 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.361 22:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:16.361 22:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:22:16.361 22:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:16.361 22:13:58 -- host/auth.sh@44 -- # digest=sha256
00:22:16.361 22:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:16.361 22:13:58 -- host/auth.sh@44 -- # keyid=2
00:22:16.361 22:13:58 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:16.361 22:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:16.361 22:13:58 -- host/auth.sh@48 -- # echo ffdhe4096
00:22:16.361 22:13:58 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:16.361 22:13:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:22:16.361 22:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:16.361 22:13:58 -- host/auth.sh@68 -- # digest=sha256
00:22:16.361 22:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:22:16.361 22:13:58 -- host/auth.sh@68 -- # keyid=2
00:22:16.361 22:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:16.361 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.361 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.361 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.361 22:13:58 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:16.361 22:13:58 -- nvmf/common.sh@717 -- # local ip
00:22:16.361 22:13:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:16.361 22:13:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:16.361 22:13:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:16.361 22:13:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:16.361 22:13:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:16.361 22:13:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:16.361 22:13:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:16.361 22:13:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:16.361 22:13:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:16.361 22:13:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:16.361 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.361 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.618 nvme0n1
00:22:16.618 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.618 22:13:58 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:16.618 22:13:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:16.618 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.618 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.618 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.876 22:13:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.876 22:13:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:16.876 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.876 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.876 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.876 22:13:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:16.876 22:13:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:22:16.876 22:13:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:16.876 22:13:58 -- host/auth.sh@44 -- # digest=sha256
00:22:16.876 22:13:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:16.876 22:13:58 -- host/auth.sh@44 -- # keyid=3
00:22:16.876 22:13:58 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:16.876 22:13:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:16.876 22:13:58 -- host/auth.sh@48 -- # echo ffdhe4096
00:22:16.876 22:13:58 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:16.876 22:13:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:22:16.876 22:13:58 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:16.876 22:13:58 -- host/auth.sh@68 -- # digest=sha256
00:22:16.876 22:13:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:22:16.876 22:13:58 -- host/auth.sh@68 -- # keyid=3
00:22:16.876 22:13:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:16.876 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.876 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:16.876 22:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:16.876 22:13:58 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:16.876 22:13:58 -- nvmf/common.sh@717 -- # local ip
00:22:16.876 22:13:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:16.876 22:13:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:16.876 22:13:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:16.876 22:13:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:16.876 22:13:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:16.876 22:13:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:16.876 22:13:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:16.876 22:13:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:16.876 22:13:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:16.876 22:13:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:22:16.876 22:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:16.876 22:13:58 -- common/autotest_common.sh@10 -- # set +x
00:22:17.134 nvme0n1
00:22:17.134 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.134 22:13:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:17.134 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.134 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.134 22:13:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:17.134 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.134 22:13:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.134 22:13:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:17.134 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.134 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.134 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.134 22:13:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:17.134 22:13:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:22:17.134 22:13:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:17.134 22:13:59 -- host/auth.sh@44 -- # digest=sha256
00:22:17.134 22:13:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:17.134 22:13:59 -- host/auth.sh@44 -- # keyid=4
00:22:17.134 22:13:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:17.134 22:13:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:17.134 22:13:59 -- host/auth.sh@48 -- # echo ffdhe4096
00:22:17.134 22:13:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:17.134 22:13:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4
00:22:17.134 22:13:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:17.134 22:13:59 -- host/auth.sh@68 -- # digest=sha256
00:22:17.134 22:13:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:22:17.134 22:13:59 -- host/auth.sh@68 -- # keyid=4
00:22:17.134 22:13:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:17.134 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.134 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.134 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.134 22:13:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:17.134 22:13:59 -- nvmf/common.sh@717 -- # local ip
00:22:17.134 22:13:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:17.134 22:13:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:17.134 22:13:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:17.134 22:13:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:17.134 22:13:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:17.134 22:13:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:17.134 22:13:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:17.134 22:13:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:17.134 22:13:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:17.134 22:13:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:17.134 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.134 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.392 nvme0n1
00:22:17.392 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.392 22:13:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:17.392 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.392 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.392 22:13:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:17.392 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.392 22:13:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.392 22:13:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:17.392 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.392 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.392 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.392 22:13:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:22:17.392 22:13:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:17.392 22:13:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:22:17.392 22:13:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:17.392 22:13:59 -- host/auth.sh@44 -- # digest=sha256
00:22:17.392 22:13:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:17.392 22:13:59 -- host/auth.sh@44 -- # keyid=0
00:22:17.392 22:13:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:17.392 22:13:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:17.392 22:13:59 -- host/auth.sh@48 -- # echo ffdhe6144
00:22:17.392 22:13:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:17.392 22:13:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0
00:22:17.392 22:13:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:17.392 22:13:59 -- host/auth.sh@68 -- # digest=sha256
00:22:17.392 22:13:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:22:17.392 22:13:59 -- host/auth.sh@68 -- # keyid=0
00:22:17.392 22:13:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:17.392 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.392 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.392 22:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.392 22:13:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:17.392 22:13:59 -- nvmf/common.sh@717 -- # local ip
00:22:17.392 22:13:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:17.392 22:13:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:17.392 22:13:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:17.392 22:13:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:17.392 22:13:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:17.392 22:13:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:17.392 22:13:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:17.392 22:13:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:17.392 22:13:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:17.392 22:13:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:22:17.392 22:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.392 22:13:59 -- common/autotest_common.sh@10 -- # set +x
00:22:17.957 nvme0n1
00:22:17.957 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:17.957 22:14:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:17.957 22:14:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:17.957 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:17.957 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:17.957 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.215 22:14:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:18.215 22:14:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:18.215 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.215 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.215 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.215 22:14:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:18.215 22:14:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:22:18.215 22:14:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:18.215 22:14:00 -- host/auth.sh@44 -- # digest=sha256
00:22:18.215 22:14:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:18.215 22:14:00 -- host/auth.sh@44 -- # keyid=1
00:22:18.215 22:14:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:18.215 22:14:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:18.215 22:14:00 -- host/auth.sh@48 -- # echo ffdhe6144
00:22:18.215 22:14:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:18.215 22:14:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1
00:22:18.215 22:14:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:18.215 22:14:00 -- host/auth.sh@68 -- # digest=sha256
00:22:18.215 22:14:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:22:18.215 22:14:00 -- host/auth.sh@68 -- # keyid=1
00:22:18.215 22:14:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:18.215 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.215 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.215 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.215 22:14:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:18.215 22:14:00 -- nvmf/common.sh@717 -- # local ip
00:22:18.215 22:14:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:18.215 22:14:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:18.215 22:14:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:18.215 22:14:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:18.215 22:14:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:18.215 22:14:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:18.215 22:14:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:18.215 22:14:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:18.215 22:14:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:18.215 22:14:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:18.215 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.215 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.780 nvme0n1
00:22:18.780 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.780 22:14:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:18.780 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.780 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.780 22:14:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:18.780 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.780 22:14:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:18.780 22:14:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:18.780 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.780 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.780 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.780 22:14:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:18.780 22:14:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:22:18.780 22:14:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:18.780 22:14:00 -- host/auth.sh@44 -- # digest=sha256
00:22:18.780 22:14:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:18.780 22:14:00 -- host/auth.sh@44 -- # keyid=2
00:22:18.780 22:14:00 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:18.780 22:14:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:18.780 22:14:00 -- host/auth.sh@48 -- # echo ffdhe6144
00:22:18.780 22:14:00 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:18.780 22:14:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2
00:22:18.780 22:14:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:18.780 22:14:00 -- host/auth.sh@68 -- # digest=sha256
00:22:18.780 22:14:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:22:18.780 22:14:00 -- host/auth.sh@68 -- # keyid=2
00:22:18.780 22:14:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:18.780 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.780 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:18.780 22:14:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:18.780 22:14:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:18.780 22:14:00 -- nvmf/common.sh@717 -- # local ip
00:22:18.780 22:14:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:18.780 22:14:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:18.780 22:14:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:18.780 22:14:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:18.780 22:14:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:18.780 22:14:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:18.780 22:14:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:18.780 22:14:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:18.780 22:14:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:18.780 22:14:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:18.780 22:14:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.780 22:14:00 -- common/autotest_common.sh@10 -- # set +x
00:22:19.343 nvme0n1
00:22:19.343 22:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.343 22:14:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:19.343 22:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.343 22:14:01 -- common/autotest_common.sh@10 -- # set +x
00:22:19.343 22:14:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:19.343 22:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.343 22:14:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:19.344 22:14:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:19.344 22:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.344 22:14:01 -- common/autotest_common.sh@10 -- # set +x
00:22:19.344 22:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.344 22:14:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:19.344 22:14:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:22:19.344 22:14:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:19.344 22:14:01 -- host/auth.sh@44 -- # digest=sha256
00:22:19.344 22:14:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:19.344 22:14:01 -- host/auth.sh@44 -- # keyid=3
00:22:19.344 22:14:01 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:19.344 22:14:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:22:19.344 22:14:01 -- host/auth.sh@48 -- # echo ffdhe6144
00:22:19.344 22:14:01 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:19.344 22:14:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3
00:22:19.344 22:14:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:19.344 22:14:01 -- host/auth.sh@68 -- # digest=sha256
00:22:19.344 22:14:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:22:19.344 22:14:01 -- host/auth.sh@68 -- # keyid=3
00:22:19.344 22:14:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:19.344 22:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.344 22:14:01 -- common/autotest_common.sh@10 -- # set +x
00:22:19.344 22:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.344 22:14:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:19.344 22:14:01 -- nvmf/common.sh@717 -- # local ip
00:22:19.344 22:14:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:19.344 22:14:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:19.344 22:14:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:19.344 22:14:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:19.344 22:14:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:19.344 22:14:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:19.344 22:14:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:19.344 22:14:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:19.344 22:14:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:19.344 22:14:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:22:19.344 22:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.344 22:14:01 -- common/autotest_common.sh@10 -- # set +x
00:22:20.275 nvme0n1
00:22:20.275 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:20.275 22:14:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:20.275 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:20.275 22:14:02 -- common/autotest_common.sh@10 -- # set +x
00:22:20.275 22:14:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:20.275 22:14:02 -- common/autotest_common.sh@577
-- # [[ 0 == 0 ]] 00:22:20.275 22:14:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.275 22:14:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.275 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.275 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.275 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.275 22:14:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.275 22:14:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:20.275 22:14:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.275 22:14:02 -- host/auth.sh@44 -- # digest=sha256 00:22:20.275 22:14:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:20.275 22:14:02 -- host/auth.sh@44 -- # keyid=4 00:22:20.275 22:14:02 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:20.275 22:14:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:20.275 22:14:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:20.275 22:14:02 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:20.275 22:14:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:22:20.275 22:14:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.275 22:14:02 -- host/auth.sh@68 -- # digest=sha256 00:22:20.275 22:14:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:20.275 22:14:02 -- host/auth.sh@68 -- # keyid=4 00:22:20.275 22:14:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:20.275 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.275 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.275 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.275 22:14:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.275 
22:14:02 -- nvmf/common.sh@717 -- # local ip 00:22:20.275 22:14:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.275 22:14:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.275 22:14:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.275 22:14:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.275 22:14:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.275 22:14:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.275 22:14:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.275 22:14:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.275 22:14:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.275 22:14:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.275 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.275 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.840 nvme0n1 00:22:20.840 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.840 22:14:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.840 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.840 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.840 22:14:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.840 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.840 22:14:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.840 22:14:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.840 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.840 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.840 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.840 22:14:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.840 22:14:02 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:22:20.840 22:14:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:20.840 22:14:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.840 22:14:02 -- host/auth.sh@44 -- # digest=sha256 00:22:20.840 22:14:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:20.840 22:14:02 -- host/auth.sh@44 -- # keyid=0 00:22:20.840 22:14:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:20.840 22:14:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:20.840 22:14:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:20.840 22:14:02 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:20.840 22:14:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:22:20.840 22:14:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.840 22:14:02 -- host/auth.sh@68 -- # digest=sha256 00:22:20.840 22:14:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:20.840 22:14:02 -- host/auth.sh@68 -- # keyid=0 00:22:20.840 22:14:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.840 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.840 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.840 22:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.840 22:14:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.840 22:14:02 -- nvmf/common.sh@717 -- # local ip 00:22:20.840 22:14:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.840 22:14:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.840 22:14:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.840 22:14:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.840 22:14:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.840 22:14:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.840 22:14:02 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.840 22:14:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.840 22:14:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.840 22:14:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:20.840 22:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.840 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:22:21.773 nvme0n1 00:22:21.773 22:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.773 22:14:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.773 22:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.773 22:14:03 -- common/autotest_common.sh@10 -- # set +x 00:22:21.773 22:14:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.773 22:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.031 22:14:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.031 22:14:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.031 22:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.031 22:14:04 -- common/autotest_common.sh@10 -- # set +x 00:22:22.031 22:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.031 22:14:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.031 22:14:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:22.031 22:14:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.031 22:14:04 -- host/auth.sh@44 -- # digest=sha256 00:22:22.031 22:14:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:22.031 22:14:04 -- host/auth.sh@44 -- # keyid=1 00:22:22.031 22:14:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:22.031 22:14:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:22.031 22:14:04 -- host/auth.sh@48 
-- # echo ffdhe8192 00:22:22.031 22:14:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:22.031 22:14:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:22:22.031 22:14:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.031 22:14:04 -- host/auth.sh@68 -- # digest=sha256 00:22:22.031 22:14:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:22.031 22:14:04 -- host/auth.sh@68 -- # keyid=1 00:22:22.031 22:14:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:22.031 22:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.031 22:14:04 -- common/autotest_common.sh@10 -- # set +x 00:22:22.031 22:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.031 22:14:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.031 22:14:04 -- nvmf/common.sh@717 -- # local ip 00:22:22.031 22:14:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.031 22:14:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.031 22:14:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.031 22:14:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.031 22:14:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.031 22:14:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.031 22:14:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.031 22:14:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.031 22:14:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.031 22:14:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:22.031 22:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.031 22:14:04 -- common/autotest_common.sh@10 -- # set +x 00:22:22.964 nvme0n1 00:22:22.964 22:14:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.964 22:14:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.964 22:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.964 22:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:22.964 22:14:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:22.964 22:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.964 22:14:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.964 22:14:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.964 22:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.964 22:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:22.964 22:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.964 22:14:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.964 22:14:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:22.964 22:14:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.964 22:14:05 -- host/auth.sh@44 -- # digest=sha256 00:22:22.964 22:14:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:22.964 22:14:05 -- host/auth.sh@44 -- # keyid=2 00:22:22.964 22:14:05 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:22.964 22:14:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:22.964 22:14:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:22.964 22:14:05 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:22.964 22:14:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:22:22.964 22:14:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.964 22:14:05 -- host/auth.sh@68 -- # digest=sha256 00:22:22.964 22:14:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:22.964 22:14:05 -- host/auth.sh@68 -- # keyid=2 00:22:22.964 22:14:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:22:22.964 22:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.964 22:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:22.964 22:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.964 22:14:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.964 22:14:05 -- nvmf/common.sh@717 -- # local ip 00:22:22.964 22:14:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.964 22:14:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.964 22:14:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.964 22:14:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.964 22:14:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.964 22:14:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.964 22:14:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.964 22:14:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.964 22:14:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.964 22:14:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:22.964 22:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.964 22:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:23.897 nvme0n1 00:22:23.897 22:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.897 22:14:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.897 22:14:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.897 22:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.897 22:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:23.897 22:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.155 22:14:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.155 22:14:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.155 22:14:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.155 22:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.155 22:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.155 22:14:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.155 22:14:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:24.155 22:14:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.155 22:14:06 -- host/auth.sh@44 -- # digest=sha256 00:22:24.155 22:14:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:24.155 22:14:06 -- host/auth.sh@44 -- # keyid=3 00:22:24.155 22:14:06 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:24.155 22:14:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:24.155 22:14:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:24.155 22:14:06 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:24.155 22:14:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:22:24.155 22:14:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.155 22:14:06 -- host/auth.sh@68 -- # digest=sha256 00:22:24.155 22:14:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:24.155 22:14:06 -- host/auth.sh@68 -- # keyid=3 00:22:24.155 22:14:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:24.155 22:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.155 22:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.155 22:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.155 22:14:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.155 22:14:06 -- nvmf/common.sh@717 -- # local ip 00:22:24.155 22:14:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.155 22:14:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.155 22:14:06 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.155 22:14:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.155 22:14:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.155 22:14:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.155 22:14:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.155 22:14:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.155 22:14:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.155 22:14:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:24.155 22:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.155 22:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:25.090 nvme0n1 00:22:25.090 22:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.090 22:14:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.090 22:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.090 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:25.090 22:14:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.090 22:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.090 22:14:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.090 22:14:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.090 22:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.090 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:25.348 22:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.348 22:14:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.348 22:14:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:25.348 22:14:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.348 22:14:07 -- host/auth.sh@44 -- # digest=sha256 00:22:25.348 22:14:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 
00:22:25.348 22:14:07 -- host/auth.sh@44 -- # keyid=4 00:22:25.348 22:14:07 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:25.348 22:14:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:25.348 22:14:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:25.348 22:14:07 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:25.348 22:14:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:22:25.348 22:14:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.348 22:14:07 -- host/auth.sh@68 -- # digest=sha256 00:22:25.348 22:14:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:25.348 22:14:07 -- host/auth.sh@68 -- # keyid=4 00:22:25.348 22:14:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:25.348 22:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.348 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:25.348 22:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.348 22:14:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.348 22:14:07 -- nvmf/common.sh@717 -- # local ip 00:22:25.348 22:14:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.348 22:14:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.348 22:14:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.348 22:14:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.348 22:14:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.348 22:14:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.348 22:14:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.348 22:14:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.348 22:14:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.348 22:14:07 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.348 22:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.348 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 nvme0n1 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.304 22:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.304 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.304 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.304 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.304 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:26.304 22:14:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.304 22:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.304 22:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:26.304 22:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.304 22:14:08 -- host/auth.sh@44 -- # digest=sha384 00:22:26.304 22:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:26.304 22:14:08 -- host/auth.sh@44 -- # keyid=0 00:22:26.304 22:14:08 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:26.304 22:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.304 22:14:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:26.304 22:14:08 -- host/auth.sh@49 -- # echo 
DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:26.304 22:14:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:22:26.304 22:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.304 22:14:08 -- host/auth.sh@68 -- # digest=sha384 00:22:26.304 22:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:26.304 22:14:08 -- host/auth.sh@68 -- # keyid=0 00:22:26.304 22:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.304 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.304 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.304 22:14:08 -- nvmf/common.sh@717 -- # local ip 00:22:26.304 22:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.304 22:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.304 22:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.304 22:14:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.304 22:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.304 22:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.304 22:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.304 22:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.304 22:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.304 22:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:26.304 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.304 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 nvme0n1 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.304 22:14:08 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:22:26.304 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.304 22:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.304 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.304 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.563 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.563 22:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:26.563 22:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # digest=sha384 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # keyid=1 00:22:26.563 22:14:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:26.563 22:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.563 22:14:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:26.563 22:14:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:22:26.563 22:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # digest=sha384 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # keyid=1 00:22:26.563 22:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.563 22:14:08 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.563 22:14:08 -- nvmf/common.sh@717 -- # local ip 00:22:26.563 22:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.563 22:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.563 22:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.563 22:14:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.563 22:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.563 22:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.563 22:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.563 22:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.563 22:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.563 22:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:26.563 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 nvme0n1 00:22:26.563 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.563 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 22:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.563 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.563 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.563 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.563 22:14:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.563 22:14:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:26.563 22:14:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # digest=sha384 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@44 -- # keyid=2 00:22:26.563 22:14:08 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:26.563 22:14:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.563 22:14:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:26.563 22:14:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:22:26.563 22:14:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # digest=sha384 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:26.563 22:14:08 -- host/auth.sh@68 -- # keyid=2 00:22:26.563 22:14:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.563 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.563 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.821 22:14:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.821 22:14:08 -- nvmf/common.sh@717 -- # local ip 00:22:26.821 22:14:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.821 22:14:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.821 22:14:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.821 22:14:08 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.821 22:14:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.821 22:14:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.821 22:14:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.821 22:14:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.821 22:14:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.821 22:14:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:26.821 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.821 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 nvme0n1 00:22:26.821 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.821 22:14:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.821 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.821 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 22:14:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.821 22:14:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.821 22:14:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.821 22:14:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.821 22:14:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.821 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.821 22:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.821 22:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:26.821 22:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.821 22:14:09 -- host/auth.sh@44 -- # digest=sha384 00:22:26.821 22:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:26.821 22:14:09 -- host/auth.sh@44 -- # keyid=3 00:22:26.821 22:14:09 -- 
host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:26.821 22:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.821 22:14:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:26.821 22:14:09 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:26.821 22:14:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:22:26.821 22:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.821 22:14:09 -- host/auth.sh@68 -- # digest=sha384 00:22:26.821 22:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:26.821 22:14:09 -- host/auth.sh@68 -- # keyid=3 00:22:26.821 22:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.821 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.821 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.821 22:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.821 22:14:09 -- nvmf/common.sh@717 -- # local ip 00:22:26.821 22:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.821 22:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.821 22:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.821 22:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.821 22:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.821 22:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.821 22:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.821 22:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.821 22:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.821 22:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:26.821 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.821 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.079 nvme0n1 00:22:27.079 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.079 22:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.079 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.079 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.079 22:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.079 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.079 22:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.079 22:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.079 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.079 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.079 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.079 22:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.079 22:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:27.079 22:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.079 22:14:09 -- host/auth.sh@44 -- # digest=sha384 00:22:27.079 22:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:27.079 22:14:09 -- host/auth.sh@44 -- # keyid=4 00:22:27.079 22:14:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:27.079 22:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.079 22:14:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:27.079 22:14:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:27.079 22:14:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:22:27.079 22:14:09 -- host/auth.sh@66 -- # local 
digest dhgroup keyid 00:22:27.079 22:14:09 -- host/auth.sh@68 -- # digest=sha384 00:22:27.079 22:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:27.079 22:14:09 -- host/auth.sh@68 -- # keyid=4 00:22:27.079 22:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:27.079 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.079 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.079 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.079 22:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.079 22:14:09 -- nvmf/common.sh@717 -- # local ip 00:22:27.079 22:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.079 22:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.079 22:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.079 22:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.079 22:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.079 22:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.079 22:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.079 22:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.079 22:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.079 22:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:27.079 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.079 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.337 nvme0n1 00:22:27.337 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.337 22:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.337 22:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.337 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.337 22:14:09 -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.337 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.337 22:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.337 22:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.337 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.337 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.337 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.337 22:14:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.337 22:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.337 22:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:27.337 22:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.337 22:14:09 -- host/auth.sh@44 -- # digest=sha384 00:22:27.337 22:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:27.337 22:14:09 -- host/auth.sh@44 -- # keyid=0 00:22:27.337 22:14:09 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:27.337 22:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.337 22:14:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:27.337 22:14:09 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:27.337 22:14:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:22:27.337 22:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.337 22:14:09 -- host/auth.sh@68 -- # digest=sha384 00:22:27.337 22:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:27.337 22:14:09 -- host/auth.sh@68 -- # keyid=0 00:22:27.337 22:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.337 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.337 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.337 22:14:09 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:22:27.337 22:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.337 22:14:09 -- nvmf/common.sh@717 -- # local ip 00:22:27.337 22:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.337 22:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.337 22:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.337 22:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.337 22:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.337 22:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.337 22:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.337 22:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.337 22:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.337 22:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:27.337 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.337 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.595 nvme0n1 00:22:27.595 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.595 22:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.595 22:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.595 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.595 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.595 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.595 22:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.595 22:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.595 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.595 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.595 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.595 22:14:09 -- host/auth.sh@109 -- 
# for keyid in "${!keys[@]}" 00:22:27.595 22:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:27.595 22:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.595 22:14:09 -- host/auth.sh@44 -- # digest=sha384 00:22:27.596 22:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:27.596 22:14:09 -- host/auth.sh@44 -- # keyid=1 00:22:27.596 22:14:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:27.596 22:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.596 22:14:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:27.596 22:14:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:27.596 22:14:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:22:27.596 22:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.596 22:14:09 -- host/auth.sh@68 -- # digest=sha384 00:22:27.596 22:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:27.596 22:14:09 -- host/auth.sh@68 -- # keyid=1 00:22:27.596 22:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.596 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.596 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.596 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.596 22:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.596 22:14:09 -- nvmf/common.sh@717 -- # local ip 00:22:27.596 22:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.596 22:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.596 22:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.596 22:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.596 22:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.596 22:14:09 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:22:27.596 22:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.596 22:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.596 22:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.596 22:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:27.596 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.596 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.855 nvme0n1 00:22:27.855 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.855 22:14:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.855 22:14:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.855 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.855 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.855 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.855 22:14:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.855 22:14:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.855 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.855 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.855 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.855 22:14:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.855 22:14:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:27.855 22:14:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.855 22:14:09 -- host/auth.sh@44 -- # digest=sha384 00:22:27.855 22:14:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:27.855 22:14:09 -- host/auth.sh@44 -- # keyid=2 00:22:27.855 22:14:09 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:27.855 22:14:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.855 
22:14:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:27.855 22:14:09 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:27.855 22:14:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:22:27.855 22:14:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.855 22:14:09 -- host/auth.sh@68 -- # digest=sha384 00:22:27.855 22:14:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:27.855 22:14:09 -- host/auth.sh@68 -- # keyid=2 00:22:27.855 22:14:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.855 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.855 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:27.855 22:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.855 22:14:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.855 22:14:09 -- nvmf/common.sh@717 -- # local ip 00:22:27.855 22:14:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.855 22:14:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.855 22:14:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.855 22:14:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.855 22:14:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.855 22:14:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.855 22:14:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.855 22:14:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.855 22:14:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.855 22:14:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:27.855 22:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.855 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:22:28.113 nvme0n1 00:22:28.113 
22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.113 22:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.113 22:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.113 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.113 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.113 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.113 22:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.113 22:14:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.113 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.113 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.113 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.113 22:14:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.113 22:14:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:28.113 22:14:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.113 22:14:10 -- host/auth.sh@44 -- # digest=sha384 00:22:28.113 22:14:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.113 22:14:10 -- host/auth.sh@44 -- # keyid=3 00:22:28.113 22:14:10 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:28.113 22:14:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.113 22:14:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.113 22:14:10 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:28.113 22:14:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:22:28.113 22:14:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.113 22:14:10 -- host/auth.sh@68 -- # digest=sha384 00:22:28.113 22:14:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.113 22:14:10 -- host/auth.sh@68 -- # keyid=3 00:22:28.113 22:14:10 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.113 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.113 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.113 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.113 22:14:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.113 22:14:10 -- nvmf/common.sh@717 -- # local ip 00:22:28.113 22:14:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.113 22:14:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.113 22:14:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.113 22:14:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.113 22:14:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.113 22:14:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.113 22:14:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.113 22:14:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.113 22:14:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.113 22:14:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:28.113 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.113 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 nvme0n1 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.371 22:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.371 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.371 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:28.371 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.371 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.371 22:14:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:28.371 22:14:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.371 22:14:10 -- host/auth.sh@44 -- # digest=sha384 00:22:28.371 22:14:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.371 22:14:10 -- host/auth.sh@44 -- # keyid=4 00:22:28.371 22:14:10 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:28.371 22:14:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.371 22:14:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.371 22:14:10 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:28.371 22:14:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:22:28.371 22:14:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.371 22:14:10 -- host/auth.sh@68 -- # digest=sha384 00:22:28.371 22:14:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.371 22:14:10 -- host/auth.sh@68 -- # keyid=4 00:22:28.371 22:14:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.371 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.371 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.371 22:14:10 -- nvmf/common.sh@717 -- # local ip 00:22:28.371 22:14:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.371 22:14:10 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.371 22:14:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.371 22:14:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.371 22:14:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.371 22:14:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.371 22:14:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.371 22:14:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.371 22:14:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.371 22:14:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:28.371 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.371 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 nvme0n1 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.371 22:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.371 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.371 22:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.371 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.371 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.628 22:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.628 22:14:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.628 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.628 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.628 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.628 22:14:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.628 22:14:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.628 22:14:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:28.628 22:14:10 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.628 22:14:10 -- host/auth.sh@44 -- # digest=sha384 00:22:28.628 22:14:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:28.628 22:14:10 -- host/auth.sh@44 -- # keyid=0 00:22:28.628 22:14:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:28.628 22:14:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.628 22:14:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:28.628 22:14:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:28.628 22:14:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:22:28.628 22:14:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.628 22:14:10 -- host/auth.sh@68 -- # digest=sha384 00:22:28.628 22:14:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:28.628 22:14:10 -- host/auth.sh@68 -- # keyid=0 00:22:28.628 22:14:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:28.628 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.628 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.628 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.628 22:14:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.628 22:14:10 -- nvmf/common.sh@717 -- # local ip 00:22:28.628 22:14:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.628 22:14:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.628 22:14:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.628 22:14:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.628 22:14:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.628 22:14:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.628 22:14:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.628 22:14:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.628 22:14:10 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.628 22:14:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:28.628 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.628 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.886 nvme0n1 00:22:28.886 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.886 22:14:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.886 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.886 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.886 22:14:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.886 22:14:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.886 22:14:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.886 22:14:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.886 22:14:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.886 22:14:10 -- common/autotest_common.sh@10 -- # set +x 00:22:28.886 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.886 22:14:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.886 22:14:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:28.886 22:14:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.886 22:14:11 -- host/auth.sh@44 -- # digest=sha384 00:22:28.886 22:14:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:28.886 22:14:11 -- host/auth.sh@44 -- # keyid=1 00:22:28.886 22:14:11 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:28.886 22:14:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.886 22:14:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:28.886 22:14:11 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:28.886 22:14:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:22:28.886 22:14:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.886 22:14:11 -- host/auth.sh@68 -- # digest=sha384 00:22:28.886 22:14:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:28.886 22:14:11 -- host/auth.sh@68 -- # keyid=1 00:22:28.886 22:14:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:28.886 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.886 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:28.886 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.886 22:14:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.886 22:14:11 -- nvmf/common.sh@717 -- # local ip 00:22:28.886 22:14:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.886 22:14:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.886 22:14:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.886 22:14:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.886 22:14:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.886 22:14:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.886 22:14:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.886 22:14:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.886 22:14:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.886 22:14:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:28.886 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.886 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.144 nvme0n1 00:22:29.144 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.144 22:14:11 
-- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.144 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.144 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.144 22:14:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.144 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.144 22:14:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.144 22:14:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.144 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.144 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.144 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.144 22:14:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.144 22:14:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:29.144 22:14:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.144 22:14:11 -- host/auth.sh@44 -- # digest=sha384 00:22:29.144 22:14:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.144 22:14:11 -- host/auth.sh@44 -- # keyid=2 00:22:29.144 22:14:11 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:29.144 22:14:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:29.144 22:14:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.144 22:14:11 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:29.144 22:14:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:22:29.144 22:14:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.144 22:14:11 -- host/auth.sh@68 -- # digest=sha384 00:22:29.144 22:14:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.144 22:14:11 -- host/auth.sh@68 -- # keyid=2 00:22:29.144 22:14:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.144 22:14:11 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:29.144 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.402 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.402 22:14:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.402 22:14:11 -- nvmf/common.sh@717 -- # local ip 00:22:29.402 22:14:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.402 22:14:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.402 22:14:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.402 22:14:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.402 22:14:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.402 22:14:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.402 22:14:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.402 22:14:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.402 22:14:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.402 22:14:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:29.402 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.402 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.660 nvme0n1 00:22:29.660 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.660 22:14:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.660 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.660 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.660 22:14:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.660 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.660 22:14:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.660 22:14:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.660 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.660 22:14:11 -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.660 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.660 22:14:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.660 22:14:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:29.660 22:14:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.660 22:14:11 -- host/auth.sh@44 -- # digest=sha384 00:22:29.660 22:14:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.660 22:14:11 -- host/auth.sh@44 -- # keyid=3 00:22:29.660 22:14:11 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:29.660 22:14:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:29.660 22:14:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.660 22:14:11 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:29.660 22:14:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:22:29.660 22:14:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.660 22:14:11 -- host/auth.sh@68 -- # digest=sha384 00:22:29.660 22:14:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.660 22:14:11 -- host/auth.sh@68 -- # keyid=3 00:22:29.660 22:14:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.660 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.660 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.660 22:14:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.660 22:14:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.660 22:14:11 -- nvmf/common.sh@717 -- # local ip 00:22:29.660 22:14:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.660 22:14:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.660 22:14:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.660 22:14:11 -- nvmf/common.sh@721 -- 
# ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.660 22:14:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.660 22:14:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.660 22:14:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.660 22:14:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.660 22:14:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.660 22:14:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:29.660 22:14:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.660 22:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.918 nvme0n1 00:22:29.918 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.918 22:14:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.918 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.918 22:14:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.918 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:29.918 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.918 22:14:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.918 22:14:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.918 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.918 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:29.918 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.918 22:14:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.918 22:14:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:29.918 22:14:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.918 22:14:12 -- host/auth.sh@44 -- # digest=sha384 00:22:29.918 22:14:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.918 22:14:12 -- host/auth.sh@44 -- # keyid=4 00:22:29.918 22:14:12 -- 
host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:29.918 22:14:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:29.918 22:14:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.918 22:14:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:29.918 22:14:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:22:29.918 22:14:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.918 22:14:12 -- host/auth.sh@68 -- # digest=sha384 00:22:29.918 22:14:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.918 22:14:12 -- host/auth.sh@68 -- # keyid=4 00:22:29.918 22:14:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.918 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.918 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.176 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.176 22:14:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.176 22:14:12 -- nvmf/common.sh@717 -- # local ip 00:22:30.176 22:14:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.176 22:14:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.177 22:14:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.177 22:14:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.177 22:14:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.177 22:14:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.177 22:14:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.177 22:14:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.177 22:14:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.177 22:14:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:30.177 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.177 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.435 nvme0n1 00:22:30.435 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.435 22:14:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.435 22:14:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.435 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.435 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.435 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.435 22:14:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.435 22:14:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.435 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.435 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.435 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.435 22:14:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.435 22:14:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.435 22:14:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:30.435 22:14:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.435 22:14:12 -- host/auth.sh@44 -- # digest=sha384 00:22:30.435 22:14:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:30.435 22:14:12 -- host/auth.sh@44 -- # keyid=0 00:22:30.435 22:14:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:30.435 22:14:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:30.435 22:14:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:30.435 22:14:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:30.435 22:14:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:22:30.435 22:14:12 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.435 22:14:12 -- host/auth.sh@68 -- # digest=sha384 00:22:30.435 22:14:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:30.435 22:14:12 -- host/auth.sh@68 -- # keyid=0 00:22:30.435 22:14:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:30.435 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.435 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.435 22:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.435 22:14:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.435 22:14:12 -- nvmf/common.sh@717 -- # local ip 00:22:30.435 22:14:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.435 22:14:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.435 22:14:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.435 22:14:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.435 22:14:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.435 22:14:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.435 22:14:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.435 22:14:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.435 22:14:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.435 22:14:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:30.435 22:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.435 22:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.041 nvme0n1 00:22:31.041 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.041 22:14:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.041 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.041 22:14:13 -- common/autotest_common.sh@10 -- # set 
+x 00:22:31.041 22:14:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.041 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.041 22:14:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.041 22:14:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.041 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.041 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.041 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.041 22:14:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:31.041 22:14:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:31.041 22:14:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:31.041 22:14:13 -- host/auth.sh@44 -- # digest=sha384 00:22:31.041 22:14:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:31.041 22:14:13 -- host/auth.sh@44 -- # keyid=1 00:22:31.041 22:14:13 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:31.041 22:14:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:31.041 22:14:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:31.041 22:14:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:31.041 22:14:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:22:31.041 22:14:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:31.041 22:14:13 -- host/auth.sh@68 -- # digest=sha384 00:22:31.041 22:14:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:31.041 22:14:13 -- host/auth.sh@68 -- # keyid=1 00:22:31.041 22:14:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:31.041 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.041 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.041 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:22:31.041 22:14:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:31.041 22:14:13 -- nvmf/common.sh@717 -- # local ip 00:22:31.041 22:14:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.041 22:14:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.041 22:14:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.041 22:14:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.041 22:14:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:31.041 22:14:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.041 22:14:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:31.041 22:14:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:31.041 22:14:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:31.041 22:14:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:31.041 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.041 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.608 nvme0n1 00:22:31.608 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.608 22:14:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.608 22:14:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.608 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.608 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.608 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.608 22:14:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.608 22:14:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.608 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.608 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.608 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.608 22:14:13 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:22:31.608 22:14:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:31.608 22:14:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:31.608 22:14:13 -- host/auth.sh@44 -- # digest=sha384 00:22:31.608 22:14:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:31.608 22:14:13 -- host/auth.sh@44 -- # keyid=2 00:22:31.608 22:14:13 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:31.608 22:14:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:31.608 22:14:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:31.608 22:14:13 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:31.608 22:14:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:22:31.608 22:14:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:31.608 22:14:13 -- host/auth.sh@68 -- # digest=sha384 00:22:31.608 22:14:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:31.608 22:14:13 -- host/auth.sh@68 -- # keyid=2 00:22:31.608 22:14:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:31.608 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.608 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.608 22:14:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.608 22:14:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:31.608 22:14:13 -- nvmf/common.sh@717 -- # local ip 00:22:31.608 22:14:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.608 22:14:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.608 22:14:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.608 22:14:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.608 22:14:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:31.608 22:14:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.608 22:14:13 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:31.608 22:14:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:31.608 22:14:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:31.608 22:14:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:31.608 22:14:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.608 22:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:32.179 nvme0n1 00:22:32.180 22:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.180 22:14:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.180 22:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.180 22:14:14 -- common/autotest_common.sh@10 -- # set +x 00:22:32.180 22:14:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:32.180 22:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.180 22:14:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.180 22:14:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.180 22:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.180 22:14:14 -- common/autotest_common.sh@10 -- # set +x 00:22:32.439 22:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.439 22:14:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.439 22:14:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:32.439 22:14:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.439 22:14:14 -- host/auth.sh@44 -- # digest=sha384 00:22:32.439 22:14:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:32.439 22:14:14 -- host/auth.sh@44 -- # keyid=3 00:22:32.439 22:14:14 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:32.439 22:14:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:32.439 22:14:14 -- host/auth.sh@48 
-- # echo ffdhe6144 00:22:32.439 22:14:14 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:32.439 22:14:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:22:32.439 22:14:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.439 22:14:14 -- host/auth.sh@68 -- # digest=sha384 00:22:32.439 22:14:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:32.439 22:14:14 -- host/auth.sh@68 -- # keyid=3 00:22:32.439 22:14:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:32.439 22:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.439 22:14:14 -- common/autotest_common.sh@10 -- # set +x 00:22:32.439 22:14:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.439 22:14:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.439 22:14:14 -- nvmf/common.sh@717 -- # local ip 00:22:32.439 22:14:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.439 22:14:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.439 22:14:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.439 22:14:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.439 22:14:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.439 22:14:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.439 22:14:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.439 22:14:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.439 22:14:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.440 22:14:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:32.440 22:14:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.440 22:14:14 -- common/autotest_common.sh@10 -- # set +x 00:22:33.005 nvme0n1 00:22:33.005 22:14:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.005 22:14:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.005 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.005 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.005 22:14:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.005 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.005 22:14:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.005 22:14:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.005 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.005 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.005 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.005 22:14:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.005 22:14:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:33.005 22:14:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.005 22:14:15 -- host/auth.sh@44 -- # digest=sha384 00:22:33.005 22:14:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:33.005 22:14:15 -- host/auth.sh@44 -- # keyid=4 00:22:33.005 22:14:15 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:33.005 22:14:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:33.005 22:14:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:33.005 22:14:15 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:33.005 22:14:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:22:33.005 22:14:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.005 22:14:15 -- host/auth.sh@68 -- # digest=sha384 00:22:33.005 22:14:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:33.005 22:14:15 -- host/auth.sh@68 -- # keyid=4 00:22:33.005 22:14:15 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:33.005 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.005 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.005 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.005 22:14:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.005 22:14:15 -- nvmf/common.sh@717 -- # local ip 00:22:33.005 22:14:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.005 22:14:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.005 22:14:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.005 22:14:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.005 22:14:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.005 22:14:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.005 22:14:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.005 22:14:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.005 22:14:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.005 22:14:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.005 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.005 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.572 nvme0n1 00:22:33.572 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.572 22:14:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.572 22:14:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.572 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.572 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.572 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.572 22:14:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.572 22:14:15 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.572 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.572 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.831 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.831 22:14:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.831 22:14:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.831 22:14:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:33.831 22:14:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.831 22:14:15 -- host/auth.sh@44 -- # digest=sha384 00:22:33.831 22:14:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:33.831 22:14:15 -- host/auth.sh@44 -- # keyid=0 00:22:33.831 22:14:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:33.831 22:14:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:33.831 22:14:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:33.831 22:14:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:33.831 22:14:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:22:33.831 22:14:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.831 22:14:15 -- host/auth.sh@68 -- # digest=sha384 00:22:33.831 22:14:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:33.831 22:14:15 -- host/auth.sh@68 -- # keyid=0 00:22:33.831 22:14:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.831 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.831 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.831 22:14:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.831 22:14:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.831 22:14:15 -- nvmf/common.sh@717 -- # local ip 00:22:33.831 22:14:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.831 22:14:15 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.831 22:14:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.831 22:14:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.831 22:14:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.831 22:14:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.831 22:14:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.831 22:14:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.831 22:14:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.831 22:14:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:33.831 22:14:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.831 22:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:34.765 nvme0n1 00:22:34.765 22:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.765 22:14:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.765 22:14:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.765 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:22:34.765 22:14:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.765 22:14:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.765 22:14:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.765 22:14:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.765 22:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.765 22:14:17 -- common/autotest_common.sh@10 -- # set +x 00:22:35.024 22:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.024 22:14:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:35.024 22:14:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:35.024 22:14:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.024 22:14:17 -- 
host/auth.sh@44 -- # digest=sha384
00:22:35.024 22:14:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:35.024 22:14:17 -- host/auth.sh@44 -- # keyid=1
00:22:35.024 22:14:17 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:35.024 22:14:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:22:35.024 22:14:17 -- host/auth.sh@48 -- # echo ffdhe8192
00:22:35.024 22:14:17 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:35.024 22:14:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1
00:22:35.024 22:14:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:35.024 22:14:17 -- host/auth.sh@68 -- # digest=sha384
00:22:35.024 22:14:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:22:35.024 22:14:17 -- host/auth.sh@68 -- # keyid=1
00:22:35.024 22:14:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:35.024 22:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.024 22:14:17 -- common/autotest_common.sh@10 -- # set +x
00:22:35.024 22:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:35.024 22:14:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:35.024 22:14:17 -- nvmf/common.sh@717 -- # local ip
00:22:35.024 22:14:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:35.024 22:14:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:35.024 22:14:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:35.024 22:14:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:35.024 22:14:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:35.024 22:14:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:35.024 22:14:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:35.024 22:14:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:35.024 22:14:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:35.024 22:14:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:35.024 22:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.024 22:14:17 -- common/autotest_common.sh@10 -- # set +x
00:22:35.958 nvme0n1
00:22:35.958 22:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:35.958 22:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:35.958 22:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.958 22:14:18 -- common/autotest_common.sh@10 -- # set +x
00:22:35.958 22:14:18 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:35.958 22:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:35.958 22:14:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:35.958 22:14:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:35.958 22:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.958 22:14:18 -- common/autotest_common.sh@10 -- # set +x
00:22:35.958 22:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:35.958 22:14:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:35.958 22:14:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:22:35.958 22:14:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:35.958 22:14:18 -- host/auth.sh@44 -- # digest=sha384
00:22:35.958 22:14:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:35.958 22:14:18 -- host/auth.sh@44 -- # keyid=2
00:22:35.958 22:14:18 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:35.958 22:14:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:22:35.958 22:14:18 -- host/auth.sh@48 -- # echo ffdhe8192
00:22:35.958 22:14:18 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:35.958 22:14:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2
00:22:35.958 22:14:18 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:35.958 22:14:18 -- host/auth.sh@68 -- # digest=sha384
00:22:35.958 22:14:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:22:35.958 22:14:18 -- host/auth.sh@68 -- # keyid=2
00:22:35.958 22:14:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:35.958 22:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.958 22:14:18 -- common/autotest_common.sh@10 -- # set +x
00:22:35.958 22:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:35.958 22:14:18 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:35.958 22:14:18 -- nvmf/common.sh@717 -- # local ip
00:22:35.958 22:14:18 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:35.958 22:14:18 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:35.958 22:14:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:35.958 22:14:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:35.958 22:14:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:35.958 22:14:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:35.958 22:14:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:35.958 22:14:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:35.958 22:14:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:35.958 22:14:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:35.958 22:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:35.958 22:14:18 -- common/autotest_common.sh@10 -- # set +x
00:22:37.331 nvme0n1
00:22:37.331 22:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:37.331 22:14:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:37.331 22:14:19 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:37.331 22:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:37.331 22:14:19 -- common/autotest_common.sh@10 -- # set +x
00:22:37.331 22:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:37.331 22:14:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:37.331 22:14:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:37.331 22:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:37.331 22:14:19 -- common/autotest_common.sh@10 -- # set +x
00:22:37.331 22:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:37.331 22:14:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:37.331 22:14:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:22:37.331 22:14:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:37.331 22:14:19 -- host/auth.sh@44 -- # digest=sha384
00:22:37.331 22:14:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:37.331 22:14:19 -- host/auth.sh@44 -- # keyid=3
00:22:37.331 22:14:19 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:37.331 22:14:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:22:37.331 22:14:19 -- host/auth.sh@48 -- # echo ffdhe8192
00:22:37.331 22:14:19 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:37.331 22:14:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3
00:22:37.331 22:14:19 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:37.331 22:14:19 -- host/auth.sh@68 -- # digest=sha384
00:22:37.331 22:14:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:22:37.331 22:14:19 -- host/auth.sh@68 -- # keyid=3
00:22:37.331 22:14:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:37.331 22:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:37.331 22:14:19 -- common/autotest_common.sh@10 -- # set +x
00:22:37.331 22:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:37.331 22:14:19 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:37.331 22:14:19 -- nvmf/common.sh@717 -- # local ip
00:22:37.331 22:14:19 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:37.331 22:14:19 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:37.331 22:14:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:37.331 22:14:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:37.331 22:14:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:37.331 22:14:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:37.331 22:14:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:37.331 22:14:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:37.331 22:14:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:37.331 22:14:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:22:37.331 22:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:37.331 22:14:19 -- common/autotest_common.sh@10 -- # set +x
00:22:38.288 nvme0n1
00:22:38.288 22:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:38.288 22:14:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:38.288 22:14:20 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:38.288 22:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:38.288 22:14:20 -- common/autotest_common.sh@10 -- # set +x
00:22:38.288 22:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:38.288 22:14:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:38.288 22:14:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:38.288 22:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:38.288 22:14:20 -- common/autotest_common.sh@10 -- # set +x
00:22:38.288 22:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:38.288 22:14:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:38.288 22:14:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:22:38.289 22:14:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:38.289 22:14:20 -- host/auth.sh@44 -- # digest=sha384
00:22:38.289 22:14:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:38.289 22:14:20 -- host/auth.sh@44 -- # keyid=4
00:22:38.289 22:14:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:38.289 22:14:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:22:38.289 22:14:20 -- host/auth.sh@48 -- # echo ffdhe8192
00:22:38.289 22:14:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:38.289 22:14:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4
00:22:38.289 22:14:20 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:38.289 22:14:20 -- host/auth.sh@68 -- # digest=sha384
00:22:38.289 22:14:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:22:38.289 22:14:20 -- host/auth.sh@68 -- # keyid=4
00:22:38.289 22:14:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:22:38.289 22:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:38.289 22:14:20 -- common/autotest_common.sh@10 -- # set +x
00:22:38.289 22:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:38.289 22:14:20 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:38.289 22:14:20 -- nvmf/common.sh@717 -- # local ip
00:22:38.289 22:14:20 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:38.289 22:14:20 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:38.289 22:14:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:38.289 22:14:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:38.289 22:14:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:38.289 22:14:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:38.289 22:14:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:38.289 22:14:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:38.289 22:14:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:38.289 22:14:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:38.289 22:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:38.289 22:14:20 -- common/autotest_common.sh@10 -- # set +x
00:22:39.219 nvme0n1
00:22:39.219 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:22:39.477 22:14:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:22:39.477 22:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:39.477 22:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:22:39.477 22:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # digest=sha512
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # keyid=0
00:22:39.477 22:14:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:39.477 22:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:39.477 22:14:21 -- host/auth.sh@48 -- # echo ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:39.477 22:14:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0
00:22:39.477 22:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # digest=sha512
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # keyid=0
00:22:39.477 22:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:39.477 22:14:21 -- nvmf/common.sh@717 -- # local ip
00:22:39.477 22:14:21 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:39.477 22:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:39.477 22:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:39.477 22:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:39.477 22:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:39.477 22:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:39.477 22:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:39.477 22:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:39.477 22:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:39.477 22:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 nvme0n1
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.477 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.477 22:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:39.477 22:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:22:39.477 22:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # digest=sha512
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@44 -- # keyid=1
00:22:39.477 22:14:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:39.477 22:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:39.477 22:14:21 -- host/auth.sh@48 -- # echo ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:39.477 22:14:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1
00:22:39.477 22:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # digest=sha512
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:22:39.477 22:14:21 -- host/auth.sh@68 -- # keyid=1
00:22:39.477 22:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:39.477 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.477 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.736 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:39.736 22:14:21 -- nvmf/common.sh@717 -- # local ip
00:22:39.736 22:14:21 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:39.736 22:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:39.736 22:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:39.736 22:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:39.736 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.736 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.736 nvme0n1
00:22:39.736 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:39.736 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.736 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.736 22:14:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:39.736 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:39.736 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.736 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.736 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:39.736 22:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:22:39.736 22:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:39.736 22:14:21 -- host/auth.sh@44 -- # digest=sha512
00:22:39.736 22:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:22:39.736 22:14:21 -- host/auth.sh@44 -- # keyid=2
00:22:39.736 22:14:21 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:39.736 22:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:39.736 22:14:21 -- host/auth.sh@48 -- # echo ffdhe2048
00:22:39.736 22:14:21 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:39.736 22:14:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2
00:22:39.736 22:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:39.736 22:14:21 -- host/auth.sh@68 -- # digest=sha512
00:22:39.736 22:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:22:39.736 22:14:21 -- host/auth.sh@68 -- # keyid=2
00:22:39.736 22:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:39.736 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.736 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.736 22:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.736 22:14:21 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:39.736 22:14:21 -- nvmf/common.sh@717 -- # local ip
00:22:39.736 22:14:21 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:39.736 22:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:39.736 22:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:39.736 22:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:39.736 22:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:39.736 22:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:39.736 22:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.736 22:14:21 -- common/autotest_common.sh@10 -- # set +x
00:22:39.994 nvme0n1
00:22:39.994 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.994 22:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:39.994 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.994 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:39.994 22:14:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:39.994 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.994 22:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.994 22:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:39.994 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.994 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:39.994 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.994 22:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:39.994 22:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:22:39.994 22:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:39.994 22:14:22 -- host/auth.sh@44 -- # digest=sha512
00:22:39.994 22:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:22:39.994 22:14:22 -- host/auth.sh@44 -- # keyid=3
00:22:39.994 22:14:22 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:39.994 22:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:39.994 22:14:22 -- host/auth.sh@48 -- # echo ffdhe2048
00:22:39.994 22:14:22 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:39.994 22:14:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3
00:22:39.994 22:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:39.994 22:14:22 -- host/auth.sh@68 -- # digest=sha512
00:22:39.995 22:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:22:39.995 22:14:22 -- host/auth.sh@68 -- # keyid=3
00:22:39.995 22:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:39.995 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.995 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:39.995 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:39.995 22:14:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:39.995 22:14:22 -- nvmf/common.sh@717 -- # local ip
00:22:39.995 22:14:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:39.995 22:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:39.995 22:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:39.995 22:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:39.995 22:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:39.995 22:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:39.995 22:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:39.995 22:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:39.995 22:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:39.995 22:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:22:39.995 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:39.995 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 nvme0n1
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:40.253 22:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:22:40.253 22:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:40.253 22:14:22 -- host/auth.sh@44 -- # digest=sha512
00:22:40.253 22:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:22:40.253 22:14:22 -- host/auth.sh@44 -- # keyid=4
00:22:40.253 22:14:22 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:40.253 22:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:40.253 22:14:22 -- host/auth.sh@48 -- # echo ffdhe2048
00:22:40.253 22:14:22 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=:
00:22:40.253 22:14:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4
00:22:40.253 22:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:40.253 22:14:22 -- host/auth.sh@68 -- # digest=sha512
00:22:40.253 22:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:22:40.253 22:14:22 -- host/auth.sh@68 -- # keyid=4
00:22:40.253 22:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:40.253 22:14:22 -- nvmf/common.sh@717 -- # local ip
00:22:40.253 22:14:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:40.253 22:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:40.253 22:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:40.253 22:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:40.253 22:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:40.253 22:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:40.253 22:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:40.253 22:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:40.253 22:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:40.253 22:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 nvme0n1
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.253 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.253 22:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:40.253 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.253 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:22:40.512 22:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:40.512 22:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:22:40.512 22:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # digest=sha512
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # keyid=0
00:22:40.512 22:14:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:40.512 22:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:40.512 22:14:22 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U:
00:22:40.512 22:14:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0
00:22:40.512 22:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # digest=sha512
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # keyid=0
00:22:40.512 22:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:40.512 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.512 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:40.512 22:14:22 -- nvmf/common.sh@717 -- # local ip
00:22:40.512 22:14:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:40.512 22:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:40.512 22:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:40.512 22:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:40.512 22:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:40.512 22:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:40.512 22:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:40.512 22:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:40.512 22:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:40.512 22:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:22:40.512 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.512 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 nvme0n1
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:40.512 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.512 22:14:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:40.512 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:40.512 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.512 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:40.512 22:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:22:40.512 22:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # digest=sha512
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@44 -- # keyid=1
00:22:40.512 22:14:22 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:40.512 22:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:40.512 22:14:22 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==:
00:22:40.512 22:14:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1
00:22:40.512 22:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # digest=sha512
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:40.512 22:14:22 -- host/auth.sh@68 -- # keyid=1
00:22:40.512 22:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:40.512 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.512 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.512 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.512 22:14:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:40.512 22:14:22 -- nvmf/common.sh@717 -- # local ip
00:22:40.771 22:14:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:40.771 22:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:40.771 22:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:40.771 22:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:40.771 22:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:40.771 22:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:40.771 22:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:40.771 22:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:40.771 22:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:40.771 22:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:22:40.771 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.771 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.771 nvme0n1
00:22:40.771 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.771 22:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:40.771 22:14:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:40.771 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.771 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.771 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.771 22:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:40.771 22:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:40.771 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.771 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.771 22:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.771 22:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:40.771 22:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:22:40.771 22:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:40.771 22:14:22 -- host/auth.sh@44 -- # digest=sha512
00:22:40.771 22:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:40.771 22:14:22 -- host/auth.sh@44 -- # keyid=2
00:22:40.771 22:14:22 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:40.771 22:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:40.771 22:14:22 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:40.771 22:14:22 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf:
00:22:40.771 22:14:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2
00:22:40.771 22:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:22:40.771 22:14:22 -- host/auth.sh@68 -- # digest=sha512
00:22:40.771 22:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:22:40.771 22:14:22 -- host/auth.sh@68 -- # keyid=2
00:22:40.771 22:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:22:40.771 22:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.771 22:14:22 -- common/autotest_common.sh@10 -- # set +x
00:22:40.771 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:40.771 22:14:23 -- host/auth.sh@70 -- # get_main_ns_ip
00:22:40.771 22:14:23 -- nvmf/common.sh@717 -- # local ip
00:22:40.771 22:14:23 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:40.771 22:14:23 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:40.771 22:14:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:40.771 22:14:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:40.771 22:14:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:40.771 22:14:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:40.771 22:14:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:40.771 22:14:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:40.771 22:14:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:40.771 22:14:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:22:40.771 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:40.771 22:14:23 -- common/autotest_common.sh@10 -- # set +x
00:22:41.029 nvme0n1
00:22:41.029 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:41.029 22:14:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:22:41.029 22:14:23 -- host/auth.sh@73 -- # jq -r '.[].name'
00:22:41.029 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:41.029 22:14:23 -- common/autotest_common.sh@10 -- # set +x
00:22:41.029 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:41.029 22:14:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:41.029 22:14:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:41.029 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:41.029 22:14:23 -- common/autotest_common.sh@10 -- # set +x
00:22:41.029 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:41.029 22:14:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:22:41.029 22:14:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:22:41.029 22:14:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:22:41.029 22:14:23 -- host/auth.sh@44 -- # digest=sha512
00:22:41.029 22:14:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:22:41.029 22:14:23 -- host/auth.sh@44 -- # keyid=3
00:22:41.030 22:14:23 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:41.030 22:14:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:22:41.030 22:14:23 -- host/auth.sh@48 -- # echo ffdhe3072
00:22:41.030 22:14:23 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==:
00:22:41.030 22:14:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:22:41.030 22:14:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:41.030 22:14:23 -- host/auth.sh@68 -- # digest=sha512 00:22:41.030 22:14:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:41.030 22:14:23 -- host/auth.sh@68 -- # keyid=3 00:22:41.030 22:14:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.030 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.030 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.030 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.030 22:14:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:41.030 22:14:23 -- nvmf/common.sh@717 -- # local ip 00:22:41.030 22:14:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.030 22:14:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.030 22:14:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.030 22:14:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.030 22:14:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:41.030 22:14:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.030 22:14:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:41.030 22:14:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:41.030 22:14:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:41.030 22:14:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:41.030 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.030 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.288 nvme0n1 00:22:41.288 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.288 22:14:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.288 22:14:23 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:22:41.288 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.288 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.288 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.288 22:14:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.288 22:14:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.288 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.288 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.288 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.288 22:14:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:41.288 22:14:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:41.288 22:14:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:41.288 22:14:23 -- host/auth.sh@44 -- # digest=sha512 00:22:41.288 22:14:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:41.288 22:14:23 -- host/auth.sh@44 -- # keyid=4 00:22:41.288 22:14:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:41.288 22:14:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:41.288 22:14:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:41.288 22:14:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:41.288 22:14:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:22:41.288 22:14:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:41.288 22:14:23 -- host/auth.sh@68 -- # digest=sha512 00:22:41.288 22:14:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:41.288 22:14:23 -- host/auth.sh@68 -- # keyid=4 00:22:41.288 22:14:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.288 22:14:23 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:22:41.288 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.288 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.288 22:14:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:41.288 22:14:23 -- nvmf/common.sh@717 -- # local ip 00:22:41.288 22:14:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.288 22:14:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.288 22:14:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.288 22:14:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.288 22:14:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:41.288 22:14:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.288 22:14:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:41.288 22:14:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:41.288 22:14:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:41.288 22:14:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.288 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.288 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.546 nvme0n1 00:22:41.546 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.546 22:14:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.546 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.546 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.546 22:14:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:41.546 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.546 22:14:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.546 22:14:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.546 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.547 22:14:23 -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.547 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.547 22:14:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.547 22:14:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:41.547 22:14:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:41.547 22:14:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:41.547 22:14:23 -- host/auth.sh@44 -- # digest=sha512 00:22:41.547 22:14:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:41.547 22:14:23 -- host/auth.sh@44 -- # keyid=0 00:22:41.547 22:14:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:41.547 22:14:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:41.547 22:14:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:41.547 22:14:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:41.547 22:14:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:22:41.547 22:14:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:41.547 22:14:23 -- host/auth.sh@68 -- # digest=sha512 00:22:41.547 22:14:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:41.547 22:14:23 -- host/auth.sh@68 -- # keyid=0 00:22:41.547 22:14:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.547 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.547 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.547 22:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.547 22:14:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:41.547 22:14:23 -- nvmf/common.sh@717 -- # local ip 00:22:41.547 22:14:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.547 22:14:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.547 22:14:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.547 
22:14:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.547 22:14:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:41.547 22:14:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.547 22:14:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:41.547 22:14:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:41.547 22:14:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:41.547 22:14:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:41.547 22:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.547 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:22:41.804 nvme0n1 00:22:41.804 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.804 22:14:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.804 22:14:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:41.804 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.804 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:41.804 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.063 22:14:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.063 22:14:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.063 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.063 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.063 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.063 22:14:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:42.063 22:14:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:42.063 22:14:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:42.063 22:14:24 -- host/auth.sh@44 -- # digest=sha512 00:22:42.063 22:14:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.063 22:14:24 -- host/auth.sh@44 -- # keyid=1 
00:22:42.063 22:14:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:42.063 22:14:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:42.063 22:14:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:42.063 22:14:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:42.063 22:14:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:22:42.063 22:14:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:42.063 22:14:24 -- host/auth.sh@68 -- # digest=sha512 00:22:42.063 22:14:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:42.063 22:14:24 -- host/auth.sh@68 -- # keyid=1 00:22:42.063 22:14:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.063 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.063 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.063 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.063 22:14:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:42.063 22:14:24 -- nvmf/common.sh@717 -- # local ip 00:22:42.063 22:14:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.063 22:14:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.063 22:14:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.063 22:14:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.063 22:14:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:42.063 22:14:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.063 22:14:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:42.063 22:14:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:42.063 22:14:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:42.063 22:14:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:42.063 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.063 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 nvme0n1 00:22:42.321 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.321 22:14:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.321 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.321 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 22:14:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:42.321 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.321 22:14:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.321 22:14:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.321 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.321 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.321 22:14:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:42.321 22:14:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:42.321 22:14:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:42.321 22:14:24 -- host/auth.sh@44 -- # digest=sha512 00:22:42.321 22:14:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.321 22:14:24 -- host/auth.sh@44 -- # keyid=2 00:22:42.321 22:14:24 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:42.321 22:14:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:42.321 22:14:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:42.321 22:14:24 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:42.321 22:14:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:22:42.321 22:14:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:42.321 22:14:24 -- 
host/auth.sh@68 -- # digest=sha512 00:22:42.321 22:14:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:42.321 22:14:24 -- host/auth.sh@68 -- # keyid=2 00:22:42.321 22:14:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.321 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.321 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.321 22:14:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:42.321 22:14:24 -- nvmf/common.sh@717 -- # local ip 00:22:42.321 22:14:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.321 22:14:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.321 22:14:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.321 22:14:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.321 22:14:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:42.321 22:14:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.321 22:14:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:42.321 22:14:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:42.321 22:14:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:42.321 22:14:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:42.321 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.321 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.580 nvme0n1 00:22:42.580 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.580 22:14:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.580 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.580 22:14:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:42.580 22:14:24 -- common/autotest_common.sh@10 -- # set +x 
00:22:42.580 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.580 22:14:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.580 22:14:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.580 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.580 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.580 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.580 22:14:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:42.580 22:14:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:42.580 22:14:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:42.580 22:14:24 -- host/auth.sh@44 -- # digest=sha512 00:22:42.580 22:14:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:42.580 22:14:24 -- host/auth.sh@44 -- # keyid=3 00:22:42.580 22:14:24 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:42.580 22:14:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:42.580 22:14:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:42.580 22:14:24 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:42.580 22:14:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:22:42.580 22:14:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:42.580 22:14:24 -- host/auth.sh@68 -- # digest=sha512 00:22:42.580 22:14:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:42.580 22:14:24 -- host/auth.sh@68 -- # keyid=3 00:22:42.580 22:14:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.580 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.580 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.838 22:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.838 22:14:24 -- host/auth.sh@70 -- # get_main_ns_ip 
00:22:42.838 22:14:24 -- nvmf/common.sh@717 -- # local ip 00:22:42.838 22:14:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.838 22:14:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.838 22:14:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.838 22:14:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.838 22:14:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:42.838 22:14:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.838 22:14:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:42.838 22:14:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:42.838 22:14:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:42.838 22:14:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:42.838 22:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.838 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:42.838 nvme0n1 00:22:42.838 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.838 22:14:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.838 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.838 22:14:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:42.838 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:42.838 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.095 22:14:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.095 22:14:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.095 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.095 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.095 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.095 22:14:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:43.095 22:14:25 -- host/auth.sh@110 
-- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:43.095 22:14:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:43.095 22:14:25 -- host/auth.sh@44 -- # digest=sha512 00:22:43.095 22:14:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:43.095 22:14:25 -- host/auth.sh@44 -- # keyid=4 00:22:43.095 22:14:25 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:43.095 22:14:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:43.095 22:14:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:43.095 22:14:25 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:43.095 22:14:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:22:43.095 22:14:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:43.095 22:14:25 -- host/auth.sh@68 -- # digest=sha512 00:22:43.095 22:14:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:43.095 22:14:25 -- host/auth.sh@68 -- # keyid=4 00:22:43.095 22:14:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.095 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.095 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.095 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.095 22:14:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:43.095 22:14:25 -- nvmf/common.sh@717 -- # local ip 00:22:43.095 22:14:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.095 22:14:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.095 22:14:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.095 22:14:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.096 22:14:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:43.096 22:14:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 
00:22:43.096 22:14:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:43.096 22:14:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:43.096 22:14:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:43.096 22:14:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.096 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.096 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.353 nvme0n1 00:22:43.353 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.353 22:14:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.353 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.353 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.353 22:14:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:43.353 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.353 22:14:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.353 22:14:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.353 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.353 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.353 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.353 22:14:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.353 22:14:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:43.353 22:14:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:43.353 22:14:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:43.353 22:14:25 -- host/auth.sh@44 -- # digest=sha512 00:22:43.353 22:14:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.353 22:14:25 -- host/auth.sh@44 -- # keyid=0 00:22:43.353 22:14:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:43.353 22:14:25 -- 
host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:43.353 22:14:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:43.353 22:14:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:43.353 22:14:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:22:43.353 22:14:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:43.353 22:14:25 -- host/auth.sh@68 -- # digest=sha512 00:22:43.353 22:14:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:43.353 22:14:25 -- host/auth.sh@68 -- # keyid=0 00:22:43.353 22:14:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.354 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.354 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:22:43.354 22:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.354 22:14:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:43.354 22:14:25 -- nvmf/common.sh@717 -- # local ip 00:22:43.354 22:14:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.354 22:14:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.354 22:14:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.354 22:14:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.354 22:14:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:43.354 22:14:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.354 22:14:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:43.354 22:14:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:43.354 22:14:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:43.354 22:14:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:43.354 22:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.354 22:14:25 -- common/autotest_common.sh@10 
-- # set +x 00:22:43.919 nvme0n1 00:22:43.919 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.919 22:14:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.920 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.920 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:43.920 22:14:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:43.920 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.920 22:14:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.920 22:14:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.920 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.920 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:43.920 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.920 22:14:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:43.920 22:14:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:43.920 22:14:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:43.920 22:14:26 -- host/auth.sh@44 -- # digest=sha512 00:22:43.920 22:14:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:43.920 22:14:26 -- host/auth.sh@44 -- # keyid=1 00:22:43.920 22:14:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:43.920 22:14:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:43.920 22:14:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:43.920 22:14:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:43.920 22:14:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:22:43.920 22:14:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:43.920 22:14:26 -- host/auth.sh@68 -- # digest=sha512 00:22:43.920 22:14:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:43.920 22:14:26 -- host/auth.sh@68 -- # keyid=1 00:22:43.920 
22:14:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.920 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.920 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:43.920 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.920 22:14:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:43.920 22:14:26 -- nvmf/common.sh@717 -- # local ip 00:22:43.920 22:14:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:43.920 22:14:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:43.920 22:14:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.920 22:14:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.920 22:14:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:43.920 22:14:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.920 22:14:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:43.920 22:14:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:43.920 22:14:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:43.920 22:14:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:43.920 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.920 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:44.855 nvme0n1 00:22:44.855 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.855 22:14:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.855 22:14:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:44.855 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.855 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:44.855 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.855 22:14:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.855 22:14:26 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.855 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.855 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:44.855 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.855 22:14:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:44.855 22:14:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:44.855 22:14:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:44.855 22:14:26 -- host/auth.sh@44 -- # digest=sha512 00:22:44.855 22:14:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:44.855 22:14:26 -- host/auth.sh@44 -- # keyid=2 00:22:44.855 22:14:26 -- host/auth.sh@45 -- # key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:44.855 22:14:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:44.855 22:14:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:44.855 22:14:26 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:44.855 22:14:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:22:44.855 22:14:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:44.855 22:14:26 -- host/auth.sh@68 -- # digest=sha512 00:22:44.855 22:14:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:44.855 22:14:26 -- host/auth.sh@68 -- # keyid=2 00:22:44.855 22:14:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.855 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.855 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:44.855 22:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.855 22:14:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:44.855 22:14:26 -- nvmf/common.sh@717 -- # local ip 00:22:44.855 22:14:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:44.855 22:14:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:44.855 22:14:26 
-- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.855 22:14:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.855 22:14:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:44.855 22:14:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.855 22:14:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:44.855 22:14:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:44.855 22:14:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:44.855 22:14:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:44.855 22:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.855 22:14:26 -- common/autotest_common.sh@10 -- # set +x 00:22:45.421 nvme0n1 00:22:45.421 22:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.421 22:14:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.421 22:14:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:45.421 22:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.422 22:14:27 -- common/autotest_common.sh@10 -- # set +x 00:22:45.422 22:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.422 22:14:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.422 22:14:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.422 22:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.422 22:14:27 -- common/autotest_common.sh@10 -- # set +x 00:22:45.422 22:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.422 22:14:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:45.422 22:14:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:45.422 22:14:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:45.422 22:14:27 -- host/auth.sh@44 -- # digest=sha512 00:22:45.422 22:14:27 -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.422 22:14:27 -- host/auth.sh@44 -- # keyid=3 00:22:45.422 22:14:27 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:45.422 22:14:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:45.422 22:14:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:45.422 22:14:27 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:45.422 22:14:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:22:45.422 22:14:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:45.422 22:14:27 -- host/auth.sh@68 -- # digest=sha512 00:22:45.422 22:14:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:45.422 22:14:27 -- host/auth.sh@68 -- # keyid=3 00:22:45.422 22:14:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.422 22:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.422 22:14:27 -- common/autotest_common.sh@10 -- # set +x 00:22:45.422 22:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.422 22:14:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:45.422 22:14:27 -- nvmf/common.sh@717 -- # local ip 00:22:45.422 22:14:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:45.422 22:14:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:45.422 22:14:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.422 22:14:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.422 22:14:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:45.422 22:14:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.422 22:14:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:45.422 22:14:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:45.422 22:14:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:45.422 22:14:27 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:45.422 22:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.422 22:14:27 -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 nvme0n1 00:22:45.988 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.988 22:14:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:45.988 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.988 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 22:14:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:45.988 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.988 22:14:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.988 22:14:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.988 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.988 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.988 22:14:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:45.988 22:14:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:45.988 22:14:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:45.988 22:14:28 -- host/auth.sh@44 -- # digest=sha512 00:22:45.988 22:14:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:45.988 22:14:28 -- host/auth.sh@44 -- # keyid=4 00:22:45.988 22:14:28 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:45.988 22:14:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:45.988 22:14:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:45.988 22:14:28 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:45.988 22:14:28 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:22:45.988 22:14:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:45.988 22:14:28 -- host/auth.sh@68 -- # digest=sha512 00:22:45.988 22:14:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:45.988 22:14:28 -- host/auth.sh@68 -- # keyid=4 00:22:45.988 22:14:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.988 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.988 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:45.988 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.988 22:14:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:45.988 22:14:28 -- nvmf/common.sh@717 -- # local ip 00:22:45.988 22:14:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:45.988 22:14:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:45.988 22:14:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.988 22:14:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.988 22:14:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:45.988 22:14:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.988 22:14:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:45.988 22:14:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:45.988 22:14:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:45.988 22:14:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:45.989 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.989 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:46.554 nvme0n1 00:22:46.554 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.554 22:14:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.554 22:14:28 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:22:46.554 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.554 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:46.554 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.554 22:14:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.554 22:14:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.554 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.554 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:46.554 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.554 22:14:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.554 22:14:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:46.554 22:14:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:46.554 22:14:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:46.554 22:14:28 -- host/auth.sh@44 -- # digest=sha512 00:22:46.554 22:14:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:46.554 22:14:28 -- host/auth.sh@44 -- # keyid=0 00:22:46.554 22:14:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:46.554 22:14:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:46.554 22:14:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:46.554 22:14:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NTY5YjgyYmU3YWRiYWY4Y2VkMjBhMmQ3OWM1ZGY2ZTDW7l5U: 00:22:46.554 22:14:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:22:46.554 22:14:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:46.554 22:14:28 -- host/auth.sh@68 -- # digest=sha512 00:22:46.554 22:14:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:46.554 22:14:28 -- host/auth.sh@68 -- # keyid=0 00:22:46.554 22:14:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.554 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.554 
22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:46.554 22:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.554 22:14:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:46.554 22:14:28 -- nvmf/common.sh@717 -- # local ip 00:22:46.554 22:14:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.554 22:14:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.554 22:14:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.554 22:14:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.554 22:14:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:46.554 22:14:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.554 22:14:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:46.554 22:14:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:46.554 22:14:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:46.554 22:14:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:46.554 22:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.554 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:22:47.488 nvme0n1 00:22:47.488 22:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.488 22:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.488 22:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:47.488 22:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.488 22:14:29 -- common/autotest_common.sh@10 -- # set +x 00:22:47.488 22:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.746 22:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.746 22:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.746 22:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.746 22:14:29 -- common/autotest_common.sh@10 -- # set +x 
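The trace repeats one cycle per digest/dhgroup/key combination: set the key on the target (`nvmet_auth_set_key`), restrict the initiator's allowed digests and DH groups (`bdev_nvme_set_options`), attach with `--dhchap-key keyN`, confirm via `bdev_nvme_get_controllers` that `nvme0` came up, then detach. A minimal sketch of that loop follows; `rpc_cmd` is stubbed out as `echo` here (in SPDK's test suite it forwards its arguments to `scripts/rpc.py` against the running target), and the function body is a simplification of `connect_authenticate` in `host/auth.sh`, not the real implementation.

```shell
#!/bin/sh
# Stub: the real rpc_cmd in common/autotest_common.sh invokes scripts/rpc.py.
rpc_cmd() { echo "rpc_cmd $*"; }

# Simplified echo of the per-key cycle visible in the trace above.
connect_authenticate() {
    digest=$1 dhgroup=$2 keyid=$3
    # restrict the initiator to one digest and one DH group
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
    # attach to the target using the key under test
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # verify the controller actually authenticated, then tear it down
    rpc_cmd bdev_nvme_get_controllers
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for keyid in 0 1 2 3 4; do
    connect_authenticate sha512 ffdhe8192 "$keyid"
done
```

The real script additionally checks the returned controller name (`[[ nvme0 == \n\v\m\e\0 ]]`) and resolves the initiator IP with `get_main_ns_ip` instead of hard-coding it.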
00:22:47.746 22:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.746 22:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:47.746 22:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:47.746 22:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:47.746 22:14:29 -- host/auth.sh@44 -- # digest=sha512 00:22:47.746 22:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:47.746 22:14:29 -- host/auth.sh@44 -- # keyid=1 00:22:47.746 22:14:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:47.746 22:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:47.746 22:14:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:47.746 22:14:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:47.746 22:14:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:22:47.746 22:14:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:47.746 22:14:29 -- host/auth.sh@68 -- # digest=sha512 00:22:47.746 22:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:47.746 22:14:29 -- host/auth.sh@68 -- # keyid=1 00:22:47.746 22:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.746 22:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.746 22:14:29 -- common/autotest_common.sh@10 -- # set +x 00:22:47.746 22:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.746 22:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:47.746 22:14:29 -- nvmf/common.sh@717 -- # local ip 00:22:47.746 22:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.746 22:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.746 22:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.746 22:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:22:47.746 22:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:47.746 22:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:47.746 22:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:47.746 22:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:47.746 22:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:47.746 22:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:47.746 22:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.746 22:14:29 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 nvme0n1 00:22:48.712 22:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.712 22:14:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.712 22:14:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:48.712 22:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.712 22:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 22:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.712 22:14:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.712 22:14:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.712 22:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.712 22:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 22:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.712 22:14:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:48.712 22:14:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:48.712 22:14:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:48.712 22:14:30 -- host/auth.sh@44 -- # digest=sha512 00:22:48.712 22:14:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:48.712 22:14:30 -- host/auth.sh@44 -- # keyid=2 00:22:48.712 22:14:30 -- host/auth.sh@45 -- # 
key=DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:48.712 22:14:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:48.712 22:14:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:48.712 22:14:30 -- host/auth.sh@49 -- # echo DHHC-1:01:YWMwYjM0OTBiY2FkYjYyMTFjYmYyY2VmNjJiODE5ZjKE0rWf: 00:22:48.712 22:14:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:22:48.712 22:14:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:48.712 22:14:30 -- host/auth.sh@68 -- # digest=sha512 00:22:48.712 22:14:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:48.712 22:14:30 -- host/auth.sh@68 -- # keyid=2 00:22:48.712 22:14:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.712 22:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.712 22:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 22:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.712 22:14:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:48.712 22:14:30 -- nvmf/common.sh@717 -- # local ip 00:22:48.712 22:14:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.712 22:14:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.712 22:14:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.712 22:14:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.712 22:14:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:48.712 22:14:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.712 22:14:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:48.712 22:14:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:48.712 22:14:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:48.712 22:14:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.712 22:14:30 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.712 22:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:49.646 nvme0n1 00:22:49.646 22:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.646 22:14:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.646 22:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.646 22:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:49.646 22:14:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:49.646 22:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.904 22:14:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.904 22:14:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.904 22:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.904 22:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:49.904 22:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.904 22:14:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:49.904 22:14:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:49.904 22:14:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:49.904 22:14:31 -- host/auth.sh@44 -- # digest=sha512 00:22:49.904 22:14:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:49.904 22:14:31 -- host/auth.sh@44 -- # keyid=3 00:22:49.904 22:14:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:49.904 22:14:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:49.904 22:14:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:49.904 22:14:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ZTY5MGQ2NjRjYjE1MTYxZjc1YjIwMTIwNWE5ZTU1NWRiNTE2MmU0NjYxOTQ5YjllpGJPUA==: 00:22:49.904 22:14:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:22:49.904 22:14:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:49.904 22:14:31 -- host/auth.sh@68 -- # digest=sha512 00:22:49.904 22:14:31 -- 
host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:49.904 22:14:31 -- host/auth.sh@68 -- # keyid=3 00:22:49.904 22:14:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:49.904 22:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.904 22:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:49.904 22:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.904 22:14:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:49.904 22:14:31 -- nvmf/common.sh@717 -- # local ip 00:22:49.904 22:14:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.904 22:14:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.904 22:14:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.904 22:14:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.904 22:14:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:49.904 22:14:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.904 22:14:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:49.904 22:14:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:49.904 22:14:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:49.904 22:14:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:49.904 22:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.904 22:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:50.839 nvme0n1 00:22:50.839 22:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.839 22:14:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.839 22:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.839 22:14:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:50.839 22:14:32 -- common/autotest_common.sh@10 -- # set +x 00:22:50.839 22:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:22:50.839 22:14:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.839 22:14:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.839 22:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.839 22:14:32 -- common/autotest_common.sh@10 -- # set +x 00:22:50.839 22:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.839 22:14:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:50.839 22:14:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:50.839 22:14:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:50.839 22:14:33 -- host/auth.sh@44 -- # digest=sha512 00:22:50.839 22:14:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:50.839 22:14:33 -- host/auth.sh@44 -- # keyid=4 00:22:50.839 22:14:33 -- host/auth.sh@45 -- # key=DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:50.839 22:14:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:50.839 22:14:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:50.839 22:14:33 -- host/auth.sh@49 -- # echo DHHC-1:03:NTAyNGFhYjgwMzkzZTUzMThkZGU1MjU0YzlkNDk0YjA1Y2VhNzM4YTUxYTY3ZDZhYTZkYWUyMzk1N2M4MjdjM1fynCE=: 00:22:50.839 22:14:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:22:50.839 22:14:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:50.839 22:14:33 -- host/auth.sh@68 -- # digest=sha512 00:22:50.839 22:14:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:50.839 22:14:33 -- host/auth.sh@68 -- # keyid=4 00:22:50.839 22:14:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:50.839 22:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.839 22:14:33 -- common/autotest_common.sh@10 -- # set +x 00:22:50.839 22:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.839 22:14:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:50.839 22:14:33 -- 
nvmf/common.sh@717 -- # local ip 00:22:50.839 22:14:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:50.839 22:14:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:50.839 22:14:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.839 22:14:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.839 22:14:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:50.839 22:14:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.839 22:14:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:50.839 22:14:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:50.839 22:14:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:50.839 22:14:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.839 22:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.839 22:14:33 -- common/autotest_common.sh@10 -- # set +x 00:22:52.211 nvme0n1 00:22:52.211 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.211 22:14:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.211 22:14:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:52.211 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.211 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.211 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.211 22:14:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.211 22:14:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.211 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.211 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.211 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.211 22:14:34 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:52.211 22:14:34 -- host/auth.sh@42 -- # local 
digest dhgroup keyid key 00:22:52.211 22:14:34 -- host/auth.sh@44 -- # digest=sha256 00:22:52.211 22:14:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:52.211 22:14:34 -- host/auth.sh@44 -- # keyid=1 00:22:52.211 22:14:34 -- host/auth.sh@45 -- # key=DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:52.211 22:14:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:52.211 22:14:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:52.211 22:14:34 -- host/auth.sh@49 -- # echo DHHC-1:00:MjFiNjdkNWYzZWRiYjQzMjUxMmNlYjBlZTZmNzlhZjgxOTBmYjYwMWJjYmQ1NDc091GbQA==: 00:22:52.211 22:14:34 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:52.211 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.211 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.211 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.211 22:14:34 -- host/auth.sh@119 -- # get_main_ns_ip 00:22:52.211 22:14:34 -- nvmf/common.sh@717 -- # local ip 00:22:52.211 22:14:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:52.211 22:14:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:52.211 22:14:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.211 22:14:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.211 22:14:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:52.211 22:14:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.211 22:14:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:52.211 22:14:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:52.211 22:14:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:52.211 22:14:34 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:52.211 22:14:34 -- common/autotest_common.sh@638 -- # local es=0 00:22:52.211 
22:14:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:52.212 22:14:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.212 22:14:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:52.212 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.212 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.212 request: 00:22:52.212 { 00:22:52.212 "name": "nvme0", 00:22:52.212 "trtype": "tcp", 00:22:52.212 "traddr": "10.0.0.1", 00:22:52.212 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:52.212 "adrfam": "ipv4", 00:22:52.212 "trsvcid": "4420", 00:22:52.212 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:52.212 "method": "bdev_nvme_attach_controller", 00:22:52.212 "req_id": 1 00:22:52.212 } 00:22:52.212 Got JSON-RPC error response 00:22:52.212 response: 00:22:52.212 { 00:22:52.212 "code": -32602, 00:22:52.212 "message": "Invalid parameters" 00:22:52.212 } 00:22:52.212 22:14:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:52.212 22:14:34 -- common/autotest_common.sh@641 -- # es=1 00:22:52.212 22:14:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:52.212 22:14:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:52.212 22:14:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:52.212 22:14:34 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.212 22:14:34 -- host/auth.sh@121 -- # jq length 00:22:52.212 22:14:34 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:52.212 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.212 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.212 22:14:34 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:22:52.212 22:14:34 -- host/auth.sh@124 -- # get_main_ns_ip 00:22:52.212 22:14:34 -- nvmf/common.sh@717 -- # local ip 00:22:52.212 22:14:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:52.212 22:14:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:52.212 22:14:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.212 22:14:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.212 22:14:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:52.212 22:14:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.212 22:14:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:52.212 22:14:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:52.212 22:14:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:52.212 22:14:34 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:52.212 22:14:34 -- common/autotest_common.sh@638 -- # local es=0 00:22:52.212 22:14:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:52.212 22:14:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:52.212 22:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.212 22:14:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:52.212 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.212 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.212 request: 00:22:52.212 { 00:22:52.212 "name": "nvme0", 00:22:52.212 "trtype": "tcp", 00:22:52.212 "traddr": "10.0.0.1", 00:22:52.212 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:52.212 "adrfam": "ipv4", 00:22:52.212 "trsvcid": "4420", 00:22:52.212 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:52.212 "dhchap_key": "key2", 00:22:52.212 "method": "bdev_nvme_attach_controller", 00:22:52.212 "req_id": 1 00:22:52.212 } 00:22:52.212 Got JSON-RPC error response 00:22:52.212 response: 00:22:52.212 { 00:22:52.212 "code": -32602, 00:22:52.212 "message": "Invalid parameters" 00:22:52.212 } 00:22:52.212 22:14:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:52.212 22:14:34 -- common/autotest_common.sh@641 -- # es=1 00:22:52.212 22:14:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:52.212 22:14:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:52.212 22:14:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:52.212 22:14:34 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.212 22:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.212 22:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:52.212 22:14:34 -- host/auth.sh@127 -- # jq length 00:22:52.212 22:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.212 22:14:34 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:52.212 22:14:34 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:22:52.212 22:14:34 -- host/auth.sh@130 -- # cleanup 00:22:52.212 22:14:34 -- host/auth.sh@24 -- # nvmftestfini 00:22:52.212 22:14:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:52.212 22:14:34 -- nvmf/common.sh@117 -- # sync 00:22:52.212 22:14:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.212 22:14:34 -- nvmf/common.sh@120 -- # set +e 00:22:52.212 
22:14:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.212 22:14:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.212 rmmod nvme_tcp 00:22:52.470 rmmod nvme_fabrics 00:22:52.470 22:14:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.470 22:14:34 -- nvmf/common.sh@124 -- # set -e 00:22:52.470 22:14:34 -- nvmf/common.sh@125 -- # return 0 00:22:52.470 22:14:34 -- nvmf/common.sh@478 -- # '[' -n 4018354 ']' 00:22:52.470 22:14:34 -- nvmf/common.sh@479 -- # killprocess 4018354 00:22:52.470 22:14:34 -- common/autotest_common.sh@936 -- # '[' -z 4018354 ']' 00:22:52.470 22:14:34 -- common/autotest_common.sh@940 -- # kill -0 4018354 00:22:52.470 22:14:34 -- common/autotest_common.sh@941 -- # uname 00:22:52.470 22:14:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.470 22:14:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4018354 00:22:52.470 22:14:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:52.470 22:14:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:52.470 22:14:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4018354' 00:22:52.470 killing process with pid 4018354 00:22:52.470 22:14:34 -- common/autotest_common.sh@955 -- # kill 4018354 00:22:52.470 22:14:34 -- common/autotest_common.sh@960 -- # wait 4018354 00:22:52.728 22:14:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:52.728 22:14:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:52.728 22:14:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:52.728 22:14:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.728 22:14:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.728 22:14:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.728 22:14:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.728 22:14:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:22:54.628 22:14:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.628 22:14:36 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:54.628 22:14:36 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:54.628 22:14:36 -- host/auth.sh@27 -- # clean_kernel_target 00:22:54.628 22:14:36 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:54.628 22:14:36 -- nvmf/common.sh@675 -- # echo 0 00:22:54.628 22:14:36 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:54.628 22:14:36 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:54.628 22:14:36 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:54.628 22:14:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:54.628 22:14:36 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:54.628 22:14:36 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:54.886 22:14:36 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:56.260 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:56.260 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.4 
(8086 0e24): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:56.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:57.193 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:22:57.451 22:14:39 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6U8 /tmp/spdk.key-null.5s8 /tmp/spdk.key-sha256.Bmq /tmp/spdk.key-sha384.fNi /tmp/spdk.key-sha512.yMw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:22:57.451 22:14:39 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:58.824 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:58.824 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:58.824 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:58.824 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:58.824 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:58.824 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:58.824 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:58.824 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:58.824 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:58.824 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:58.824 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:58.824 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:58.824 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:58.824 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:58.824 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:58.824 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:58.824 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:58.824 00:22:58.824 real 0m51.939s 00:22:58.824 user 0m49.992s 
00:22:58.824 sys 0m6.399s 00:22:58.824 22:14:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.824 22:14:40 -- common/autotest_common.sh@10 -- # set +x 00:22:58.824 ************************************ 00:22:58.824 END TEST nvmf_auth 00:22:58.824 ************************************ 00:22:58.824 22:14:40 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:22:58.824 22:14:40 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:58.824 22:14:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.824 22:14:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.824 22:14:40 -- common/autotest_common.sh@10 -- # set +x 00:22:59.082 ************************************ 00:22:59.082 START TEST nvmf_digest 00:22:59.082 ************************************ 00:22:59.082 22:14:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:59.082 * Looking for test storage... 
00:22:59.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.082 22:14:41 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.082 22:14:41 -- nvmf/common.sh@7 -- # uname -s 00:22:59.082 22:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.082 22:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.082 22:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.082 22:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.082 22:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.082 22:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.083 22:14:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.083 22:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.083 22:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.083 22:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.083 22:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.083 22:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.083 22:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.083 22:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.083 22:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.083 22:14:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.083 22:14:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.083 22:14:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.083 22:14:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.083 22:14:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.083 22:14:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.083 22:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.083 22:14:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.083 22:14:41 -- paths/export.sh@5 -- # export PATH 00:22:59.083 22:14:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.083 22:14:41 -- nvmf/common.sh@47 -- # : 0 00:22:59.083 22:14:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.083 22:14:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.083 22:14:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.083 22:14:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.083 22:14:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.083 22:14:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.083 22:14:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.083 22:14:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.083 22:14:41 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:59.083 22:14:41 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:59.083 22:14:41 -- host/digest.sh@16 -- # runtime=2 00:22:59.083 22:14:41 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:59.083 22:14:41 -- host/digest.sh@138 -- # nvmftestinit 00:22:59.083 22:14:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:59.083 22:14:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.083 22:14:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:59.083 22:14:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:59.083 22:14:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:59.083 22:14:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.083 22:14:41 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:22:59.083 22:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.083 22:14:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:59.083 22:14:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:59.083 22:14:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.083 22:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:01.609 22:14:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.609 22:14:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.609 22:14:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.609 22:14:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.609 22:14:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.609 22:14:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.609 22:14:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.609 22:14:43 -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.609 22:14:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.609 22:14:43 -- nvmf/common.sh@296 -- # e810=() 00:23:01.609 22:14:43 -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.609 22:14:43 -- nvmf/common.sh@297 -- # x722=() 00:23:01.609 22:14:43 -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.609 22:14:43 -- nvmf/common.sh@298 -- # mlx=() 00:23:01.609 22:14:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.609 22:14:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.609 22:14:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.609 22:14:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.610 22:14:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.610 22:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:01.610 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:01.610 22:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.610 22:14:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:01.610 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:01.610 22:14:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:23:01.610 22:14:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.610 22:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.610 22:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.610 22:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:01.610 Found net devices under 0000:84:00.0: cvl_0_0 00:23:01.610 22:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.610 22:14:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.610 22:14:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.610 22:14:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:01.610 Found net devices under 0000:84:00.1: cvl_0_1 00:23:01.610 22:14:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:01.610 22:14:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:01.610 22:14:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.610 22:14:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.610 22:14:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.610 22:14:43 -- nvmf/common.sh@236 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.610 22:14:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.610 22:14:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.610 22:14:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.610 22:14:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.610 22:14:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.610 22:14:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.610 22:14:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.610 22:14:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.610 22:14:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.610 22:14:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.610 22:14:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.610 22:14:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.610 22:14:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.610 22:14:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.610 22:14:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:23:01.610 00:23:01.610 --- 10.0.0.2 ping statistics --- 00:23:01.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.610 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:23:01.610 22:14:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:23:01.610 00:23:01.610 --- 10.0.0.1 ping statistics --- 00:23:01.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.610 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:23:01.610 22:14:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.610 22:14:43 -- nvmf/common.sh@411 -- # return 0 00:23:01.610 22:14:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:01.610 22:14:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.610 22:14:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:01.610 22:14:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.610 22:14:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:01.610 22:14:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:01.610 22:14:43 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:01.610 22:14:43 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:01.610 22:14:43 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:01.610 22:14:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:01.610 22:14:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:01.610 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:01.610 ************************************ 00:23:01.610 START TEST nvmf_digest_clean 00:23:01.610 ************************************ 00:23:01.610 22:14:43 -- common/autotest_common.sh@1111 -- # run_digest 00:23:01.610 22:14:43 -- host/digest.sh@120 -- # local dsa_initiator 00:23:01.610 22:14:43 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:01.610 22:14:43 -- host/digest.sh@121 -- # dsa_initiator=false 00:23:01.610 22:14:43 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:01.610 22:14:43 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:01.610 22:14:43 -- 
nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:01.610 22:14:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:01.610 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:01.610 22:14:43 -- nvmf/common.sh@470 -- # nvmfpid=4028068 00:23:01.610 22:14:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:01.610 22:14:43 -- nvmf/common.sh@471 -- # waitforlisten 4028068 00:23:01.610 22:14:43 -- common/autotest_common.sh@817 -- # '[' -z 4028068 ']' 00:23:01.610 22:14:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.610 22:14:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.610 22:14:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.610 22:14:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.610 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:01.610 [2024-04-24 22:14:43.854600] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:01.610 [2024-04-24 22:14:43.854694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.868 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.868 [2024-04-24 22:14:43.930749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.868 [2024-04-24 22:14:44.052917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.868 [2024-04-24 22:14:44.052984] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.868 [2024-04-24 22:14:44.053000] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.868 [2024-04-24 22:14:44.053022] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.868 [2024-04-24 22:14:44.053035] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.868 [2024-04-24 22:14:44.053067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.868 22:14:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:01.868 22:14:44 -- common/autotest_common.sh@850 -- # return 0 00:23:01.868 22:14:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:01.868 22:14:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.868 22:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:01.869 22:14:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.869 22:14:44 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:01.869 22:14:44 -- host/digest.sh@126 -- # common_target_config 00:23:01.869 22:14:44 -- host/digest.sh@43 -- # rpc_cmd 00:23:01.869 22:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.869 22:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.127 null0 00:23:02.127 [2024-04-24 22:14:44.236804] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.127 [2024-04-24 22:14:44.260783] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:02.127 [2024-04-24 22:14:44.261097] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.127 22:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.127 22:14:44 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:23:02.127 22:14:44 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:02.127 22:14:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:02.127 22:14:44 -- host/digest.sh@80 -- # rw=randread 00:23:02.127 22:14:44 -- host/digest.sh@80 -- # bs=4096 00:23:02.127 22:14:44 -- host/digest.sh@80 -- # qd=128 00:23:02.127 22:14:44 -- host/digest.sh@80 -- # scan_dsa=false 00:23:02.127 22:14:44 -- host/digest.sh@83 -- # bperfpid=4028208 00:23:02.127 22:14:44 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:02.127 22:14:44 -- host/digest.sh@84 -- # waitforlisten 4028208 /var/tmp/bperf.sock 00:23:02.127 22:14:44 -- common/autotest_common.sh@817 -- # '[' -z 4028208 ']' 00:23:02.127 22:14:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:02.127 22:14:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:02.127 22:14:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:02.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:02.127 22:14:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:02.127 22:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.127 [2024-04-24 22:14:44.311671] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:23:02.127 [2024-04-24 22:14:44.311750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028208 ] 00:23:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.127 [2024-04-24 22:14:44.379943] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.386 [2024-04-24 22:14:44.501695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.644 22:14:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:02.644 22:14:44 -- common/autotest_common.sh@850 -- # return 0 00:23:02.644 22:14:44 -- host/digest.sh@86 -- # false 00:23:02.644 22:14:44 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:02.644 22:14:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:02.902 22:14:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.902 22:14:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:03.468 nvme0n1 00:23:03.468 22:14:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:03.468 22:14:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:03.468 Running I/O for 2 seconds... 
00:23:05.995 00:23:05.995 Latency(us) 00:23:05.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.995 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:05.995 nvme0n1 : 2.01 18133.98 70.84 0.00 0.00 7049.97 3640.89 23495.87 00:23:05.995 =================================================================================================================== 00:23:05.995 Total : 18133.98 70.84 0.00 0.00 7049.97 3640.89 23495.87 00:23:05.995 0 00:23:05.995 22:14:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:05.995 22:14:47 -- host/digest.sh@93 -- # get_accel_stats 00:23:05.995 22:14:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:05.995 22:14:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:05.995 | select(.opcode=="crc32c") 00:23:05.995 | "\(.module_name) \(.executed)"' 00:23:05.995 22:14:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:05.995 22:14:48 -- host/digest.sh@94 -- # false 00:23:05.995 22:14:48 -- host/digest.sh@94 -- # exp_module=software 00:23:05.995 22:14:48 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:05.995 22:14:48 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:05.995 22:14:48 -- host/digest.sh@98 -- # killprocess 4028208 00:23:05.995 22:14:48 -- common/autotest_common.sh@936 -- # '[' -z 4028208 ']' 00:23:05.995 22:14:48 -- common/autotest_common.sh@940 -- # kill -0 4028208 00:23:05.995 22:14:48 -- common/autotest_common.sh@941 -- # uname 00:23:05.995 22:14:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.995 22:14:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4028208 00:23:05.995 22:14:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.995 22:14:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.995 22:14:48 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 4028208' 00:23:05.995 killing process with pid 4028208 00:23:05.995 22:14:48 -- common/autotest_common.sh@955 -- # kill 4028208 00:23:05.995 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.995 00:23:05.995 Latency(us) 00:23:05.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.995 =================================================================================================================== 00:23:05.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.995 22:14:48 -- common/autotest_common.sh@960 -- # wait 4028208 00:23:06.560 22:14:48 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:06.560 22:14:48 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:06.560 22:14:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:06.560 22:14:48 -- host/digest.sh@80 -- # rw=randread 00:23:06.560 22:14:48 -- host/digest.sh@80 -- # bs=131072 00:23:06.560 22:14:48 -- host/digest.sh@80 -- # qd=16 00:23:06.560 22:14:48 -- host/digest.sh@80 -- # scan_dsa=false 00:23:06.560 22:14:48 -- host/digest.sh@83 -- # bperfpid=4028629 00:23:06.560 22:14:48 -- host/digest.sh@84 -- # waitforlisten 4028629 /var/tmp/bperf.sock 00:23:06.560 22:14:48 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:06.560 22:14:48 -- common/autotest_common.sh@817 -- # '[' -z 4028629 ']' 00:23:06.560 22:14:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:06.560 22:14:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.560 22:14:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:06.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:06.560 22:14:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.560 22:14:48 -- common/autotest_common.sh@10 -- # set +x 00:23:06.560 [2024-04-24 22:14:48.560093] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:06.560 [2024-04-24 22:14:48.560178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028629 ] 00:23:06.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:06.560 Zero copy mechanism will not be used. 00:23:06.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.560 [2024-04-24 22:14:48.628637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.560 [2024-04-24 22:14:48.749527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.817 22:14:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.817 22:14:48 -- common/autotest_common.sh@850 -- # return 0 00:23:06.817 22:14:48 -- host/digest.sh@86 -- # false 00:23:06.817 22:14:48 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:06.817 22:14:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:07.075 22:14:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.075 22:14:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:08.010 nvme0n1 00:23:08.010 22:14:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:08.010 22:14:49 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.010 Zero copy mechanism will not be used. 00:23:08.010 Running I/O for 2 seconds... 00:23:09.947 00:23:09.947 Latency(us) 00:23:09.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.947 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:09.947 nvme0n1 : 2.00 3239.88 404.98 0.00 0.00 4934.25 1334.99 14078.10 00:23:09.947 =================================================================================================================== 00:23:09.947 Total : 3239.88 404.98 0.00 0.00 4934.25 1334.99 14078.10 00:23:09.947 0 00:23:09.947 22:14:52 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:09.947 22:14:52 -- host/digest.sh@93 -- # get_accel_stats 00:23:09.947 22:14:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:09.947 22:14:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:09.947 | select(.opcode=="crc32c") 00:23:09.947 | "\(.module_name) \(.executed)"' 00:23:09.947 22:14:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:10.513 22:14:52 -- host/digest.sh@94 -- # false 00:23:10.513 22:14:52 -- host/digest.sh@94 -- # exp_module=software 00:23:10.513 22:14:52 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:10.513 22:14:52 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:10.513 22:14:52 -- host/digest.sh@98 -- # killprocess 4028629 00:23:10.513 22:14:52 -- common/autotest_common.sh@936 -- # '[' -z 4028629 ']' 00:23:10.513 22:14:52 -- common/autotest_common.sh@940 -- # kill -0 4028629 00:23:10.513 22:14:52 -- common/autotest_common.sh@941 -- # uname 00:23:10.513 22:14:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.513 22:14:52 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4028629 00:23:10.513 22:14:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:10.513 22:14:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:10.513 22:14:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4028629' 00:23:10.513 killing process with pid 4028629 00:23:10.513 22:14:52 -- common/autotest_common.sh@955 -- # kill 4028629 00:23:10.513 Received shutdown signal, test time was about 2.000000 seconds 00:23:10.513 00:23:10.513 Latency(us) 00:23:10.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.513 =================================================================================================================== 00:23:10.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.513 22:14:52 -- common/autotest_common.sh@960 -- # wait 4028629 00:23:10.770 22:14:52 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:10.770 22:14:52 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:10.770 22:14:52 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:10.770 22:14:52 -- host/digest.sh@80 -- # rw=randwrite 00:23:10.770 22:14:52 -- host/digest.sh@80 -- # bs=4096 00:23:10.770 22:14:52 -- host/digest.sh@80 -- # qd=128 00:23:10.770 22:14:52 -- host/digest.sh@80 -- # scan_dsa=false 00:23:10.770 22:14:52 -- host/digest.sh@83 -- # bperfpid=4029160 00:23:10.770 22:14:52 -- host/digest.sh@84 -- # waitforlisten 4029160 /var/tmp/bperf.sock 00:23:10.770 22:14:52 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:10.770 22:14:52 -- common/autotest_common.sh@817 -- # '[' -z 4029160 ']' 00:23:10.770 22:14:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:10.770 22:14:52 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:23:10.770 22:14:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:10.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:10.770 22:14:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.770 22:14:52 -- common/autotest_common.sh@10 -- # set +x 00:23:11.028 [2024-04-24 22:14:53.052301] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:11.028 [2024-04-24 22:14:53.052490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029160 ] 00:23:11.028 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.028 [2024-04-24 22:14:53.149658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.028 [2024-04-24 22:14:53.270826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.286 22:14:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.286 22:14:53 -- common/autotest_common.sh@850 -- # return 0 00:23:11.286 22:14:53 -- host/digest.sh@86 -- # false 00:23:11.286 22:14:53 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:11.286 22:14:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:11.853 22:14:53 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.853 22:14:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:12.419 nvme0n1 00:23:12.419 22:14:54 -- host/digest.sh@92 -- # 
bperf_py perform_tests 00:23:12.419 22:14:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:12.419 Running I/O for 2 seconds... 00:23:14.320 00:23:14.320 Latency(us) 00:23:14.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.320 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:14.320 nvme0n1 : 2.01 19698.00 76.95 0.00 0.00 6487.76 3422.44 12281.93 00:23:14.320 =================================================================================================================== 00:23:14.320 Total : 19698.00 76.95 0.00 0.00 6487.76 3422.44 12281.93 00:23:14.320 0 00:23:14.320 22:14:56 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:14.320 22:14:56 -- host/digest.sh@93 -- # get_accel_stats 00:23:14.320 22:14:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:14.320 22:14:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:14.320 22:14:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:14.320 | select(.opcode=="crc32c") 00:23:14.320 | "\(.module_name) \(.executed)"' 00:23:14.886 22:14:56 -- host/digest.sh@94 -- # false 00:23:14.886 22:14:56 -- host/digest.sh@94 -- # exp_module=software 00:23:14.886 22:14:56 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:14.886 22:14:56 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:14.886 22:14:56 -- host/digest.sh@98 -- # killprocess 4029160 00:23:14.886 22:14:56 -- common/autotest_common.sh@936 -- # '[' -z 4029160 ']' 00:23:14.886 22:14:56 -- common/autotest_common.sh@940 -- # kill -0 4029160 00:23:14.886 22:14:56 -- common/autotest_common.sh@941 -- # uname 00:23:14.886 22:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:14.886 22:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
4029160 00:23:14.886 22:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:14.886 22:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:14.886 22:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4029160' 00:23:14.886 killing process with pid 4029160 00:23:14.886 22:14:56 -- common/autotest_common.sh@955 -- # kill 4029160 00:23:14.886 Received shutdown signal, test time was about 2.000000 seconds 00:23:14.886 00:23:14.886 Latency(us) 00:23:14.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.886 =================================================================================================================== 00:23:14.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.886 22:14:56 -- common/autotest_common.sh@960 -- # wait 4029160 00:23:15.143 22:14:57 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:15.143 22:14:57 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:15.143 22:14:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:15.143 22:14:57 -- host/digest.sh@80 -- # rw=randwrite 00:23:15.143 22:14:57 -- host/digest.sh@80 -- # bs=131072 00:23:15.143 22:14:57 -- host/digest.sh@80 -- # qd=16 00:23:15.143 22:14:57 -- host/digest.sh@80 -- # scan_dsa=false 00:23:15.143 22:14:57 -- host/digest.sh@83 -- # bperfpid=4029695 00:23:15.143 22:14:57 -- host/digest.sh@84 -- # waitforlisten 4029695 /var/tmp/bperf.sock 00:23:15.143 22:14:57 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:15.143 22:14:57 -- common/autotest_common.sh@817 -- # '[' -z 4029695 ']' 00:23:15.143 22:14:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:15.143 22:14:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.143 22:14:57 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:15.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:15.143 22:14:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.143 22:14:57 -- common/autotest_common.sh@10 -- # set +x 00:23:15.143 [2024-04-24 22:14:57.284739] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:15.143 [2024-04-24 22:14:57.284838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029695 ] 00:23:15.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:15.143 Zero copy mechanism will not be used. 00:23:15.143 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.143 [2024-04-24 22:14:57.359352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.401 [2024-04-24 22:14:57.474925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.401 22:14:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:15.401 22:14:57 -- common/autotest_common.sh@850 -- # return 0 00:23:15.401 22:14:57 -- host/digest.sh@86 -- # false 00:23:15.401 22:14:57 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:15.401 22:14:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:15.968 22:14:58 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.968 22:14:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.225 nvme0n1 00:23:16.225 22:14:58 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:16.225 22:14:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.483 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:16.483 Zero copy mechanism will not be used. 00:23:16.483 Running I/O for 2 seconds... 00:23:18.382 00:23:18.382 Latency(us) 00:23:18.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.382 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:18.382 nvme0n1 : 2.00 3037.66 379.71 0.00 0.00 5255.54 4004.98 9126.49 00:23:18.383 =================================================================================================================== 00:23:18.383 Total : 3037.66 379.71 0.00 0.00 5255.54 4004.98 9126.49 00:23:18.383 0 00:23:18.383 22:15:00 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:18.383 22:15:00 -- host/digest.sh@93 -- # get_accel_stats 00:23:18.383 22:15:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:18.383 22:15:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:18.383 22:15:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:18.383 | select(.opcode=="crc32c") 00:23:18.383 | "\(.module_name) \(.executed)"' 00:23:18.949 22:15:00 -- host/digest.sh@94 -- # false 00:23:18.949 22:15:00 -- host/digest.sh@94 -- # exp_module=software 00:23:18.949 22:15:00 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:18.949 22:15:00 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:18.949 22:15:00 -- host/digest.sh@98 -- # killprocess 4029695 00:23:18.949 22:15:00 -- common/autotest_common.sh@936 -- # '[' -z 4029695 ']' 00:23:18.949 22:15:00 -- common/autotest_common.sh@940 -- # kill -0 4029695 
00:23:18.949 22:15:00 -- common/autotest_common.sh@941 -- # uname 00:23:18.949 22:15:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.949 22:15:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4029695 00:23:18.949 22:15:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.949 22:15:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.949 22:15:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4029695' 00:23:18.949 killing process with pid 4029695 00:23:18.949 22:15:01 -- common/autotest_common.sh@955 -- # kill 4029695 00:23:18.949 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.949 00:23:18.949 Latency(us) 00:23:18.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.949 =================================================================================================================== 00:23:18.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.949 22:15:01 -- common/autotest_common.sh@960 -- # wait 4029695 00:23:19.207 22:15:01 -- host/digest.sh@132 -- # killprocess 4028068 00:23:19.207 22:15:01 -- common/autotest_common.sh@936 -- # '[' -z 4028068 ']' 00:23:19.207 22:15:01 -- common/autotest_common.sh@940 -- # kill -0 4028068 00:23:19.207 22:15:01 -- common/autotest_common.sh@941 -- # uname 00:23:19.207 22:15:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.207 22:15:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4028068 00:23:19.207 22:15:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:19.207 22:15:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:19.207 22:15:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4028068' 00:23:19.207 killing process with pid 4028068 00:23:19.207 22:15:01 -- common/autotest_common.sh@955 -- # kill 4028068 00:23:19.207 [2024-04-24 22:15:01.368111] app.c: 
937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:19.207 22:15:01 -- common/autotest_common.sh@960 -- # wait 4028068 00:23:19.465 00:23:19.465 real 0m17.847s 00:23:19.465 user 0m36.948s 00:23:19.465 sys 0m4.891s 00:23:19.465 22:15:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.465 22:15:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.465 ************************************ 00:23:19.465 END TEST nvmf_digest_clean 00:23:19.465 ************************************ 00:23:19.465 22:15:01 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:19.465 22:15:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:19.465 22:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.465 22:15:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.724 ************************************ 00:23:19.724 START TEST nvmf_digest_error 00:23:19.724 ************************************ 00:23:19.724 22:15:01 -- common/autotest_common.sh@1111 -- # run_digest_error 00:23:19.724 22:15:01 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:19.724 22:15:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:19.724 22:15:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:19.724 22:15:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.724 22:15:01 -- nvmf/common.sh@470 -- # nvmfpid=4030349 00:23:19.724 22:15:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:19.724 22:15:01 -- nvmf/common.sh@471 -- # waitforlisten 4030349 00:23:19.724 22:15:01 -- common/autotest_common.sh@817 -- # '[' -z 4030349 ']' 00:23:19.724 22:15:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.724 22:15:01 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:23:19.724 22:15:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.724 22:15:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:19.724 22:15:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.724 [2024-04-24 22:15:01.837333] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:19.724 [2024-04-24 22:15:01.837433] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.724 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.724 [2024-04-24 22:15:01.911938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.982 [2024-04-24 22:15:02.030763] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.982 [2024-04-24 22:15:02.030830] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.982 [2024-04-24 22:15:02.030846] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.982 [2024-04-24 22:15:02.030860] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.982 [2024-04-24 22:15:02.030872] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.982 [2024-04-24 22:15:02.030910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.982 22:15:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:19.982 22:15:02 -- common/autotest_common.sh@850 -- # return 0 00:23:19.982 22:15:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:19.982 22:15:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:19.982 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:23:19.982 22:15:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.982 22:15:02 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:19.982 22:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.982 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:23:19.982 [2024-04-24 22:15:02.103501] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:19.982 22:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.982 22:15:02 -- host/digest.sh@105 -- # common_target_config 00:23:19.982 22:15:02 -- host/digest.sh@43 -- # rpc_cmd 00:23:19.982 22:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.982 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:23:19.982 null0 00:23:19.982 [2024-04-24 22:15:02.219134] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.240 [2024-04-24 22:15:02.243125] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:20.240 [2024-04-24 22:15:02.243435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.240 22:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.240 22:15:02 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:20.240 22:15:02 -- host/digest.sh@54 -- # 
local rw bs qd 00:23:20.240 22:15:02 -- host/digest.sh@56 -- # rw=randread 00:23:20.240 22:15:02 -- host/digest.sh@56 -- # bs=4096 00:23:20.240 22:15:02 -- host/digest.sh@56 -- # qd=128 00:23:20.240 22:15:02 -- host/digest.sh@58 -- # bperfpid=4030393 00:23:20.240 22:15:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:20.240 22:15:02 -- host/digest.sh@60 -- # waitforlisten 4030393 /var/tmp/bperf.sock 00:23:20.240 22:15:02 -- common/autotest_common.sh@817 -- # '[' -z 4030393 ']' 00:23:20.240 22:15:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:20.240 22:15:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:20.240 22:15:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:20.240 22:15:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:20.240 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:23:20.240 [2024-04-24 22:15:02.292388] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:23:20.240 [2024-04-24 22:15:02.292475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030393 ] 00:23:20.240 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.240 [2024-04-24 22:15:02.363082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.240 [2024-04-24 22:15:02.481456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.498 22:15:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:20.498 22:15:02 -- common/autotest_common.sh@850 -- # return 0 00:23:20.498 22:15:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.498 22:15:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.755 22:15:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:20.755 22:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.755 22:15:02 -- common/autotest_common.sh@10 -- # set +x 00:23:20.755 22:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.755 22:15:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.755 22:15:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.328 nvme0n1 00:23:21.328 22:15:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:21.328 22:15:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.328 22:15:03 -- common/autotest_common.sh@10 -- # 
set +x 00:23:21.328 22:15:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.328 22:15:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:21.328 22:15:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.328 Running I/O for 2 seconds... 00:23:21.587 [2024-04-24 22:15:03.595464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.595516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.595539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.608299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.608334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.608353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.625888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.625923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.625942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.641949] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.641984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.642004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.655751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.655786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.655805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.669874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.669908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.669928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.587 [2024-04-24 22:15:03.683451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:21.587 [2024-04-24 22:15:03.683485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.587 [2024-04-24 22:15:03.683504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:23:21.587 [2024-04-24 22:15:03.695813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.587 [2024-04-24 22:15:03.695847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.587 [2024-04-24 22:15:03.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.587 [2024-04-24 22:15:03.714500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.714541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.714561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.728227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.728262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.728281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.740657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.740691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.740710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.755654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.755688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.755707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.770181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.770215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.770234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.783312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.783346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.783365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.798164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.798200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.798220] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.813706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.813740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.813759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.827494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.827528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.827547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.588 [2024-04-24 22:15:03.839160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.588 [2024-04-24 22:15:03.839193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.588 [2024-04-24 22:15:03.839212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.854964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.854999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0
00:23:21.846 [2024-04-24 22:15:03.855018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.868893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.868926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.868945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.881460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.881495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.881514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.895110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.895143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.895161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.909888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.909921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32
nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.909939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.922190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.922223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.922243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.936547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.936580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.936599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.951218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.951256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.951276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.964829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.964863] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.964881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.982070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.982104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.982123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:03.996928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:03.996961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:03.996980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.009881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.009914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.009933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.025635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.025668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.025688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.040127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.040160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.040179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.052419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.052452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.052471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.067578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.067611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.067630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.083967]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.084020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:21.846 [2024-04-24 22:15:04.096439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:21.846 [2024-04-24 22:15:04.096473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.846 [2024-04-24 22:15:04.096491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.111933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.111973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.111992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.125501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.125537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.125557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0
sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.138947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.138981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.139000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.151804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.151838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.151856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.167487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.167521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.167540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.181031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.181065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.181084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.193578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.193611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.193639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.207258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.207292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.207311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.221858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.221909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.239658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.239693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.239712]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.257276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.257310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.257329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.269004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.269038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.269057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.285002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.285037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.285055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.302822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.104 [2024-04-24 22:15:04.302856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11280 len:1 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0
00:23:22.104 [2024-04-24 22:15:04.302875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.104 [2024-04-24 22:15:04.314622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.105 [2024-04-24 22:15:04.314656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.105 [2024-04-24 22:15:04.314674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.105 [2024-04-24 22:15:04.328866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.105 [2024-04-24 22:15:04.328906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.105 [2024-04-24 22:15:04.328925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.105 [2024-04-24 22:15:04.345144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.105 [2024-04-24 22:15:04.345178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.105 [2024-04-24 22:15:04.345197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.105 [2024-04-24 22:15:04.357086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.105 [2024-04-24 22:15:04.357118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:77 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.105 [2024-04-24 22:15:04.357137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.362 [2024-04-24 22:15:04.374362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.362 [2024-04-24 22:15:04.374406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.362 [2024-04-24 22:15:04.374428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.362 [2024-04-24 22:15:04.388880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.362 [2024-04-24 22:15:04.388914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.362 [2024-04-24 22:15:04.388933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.362 [2024-04-24 22:15:04.400841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.362 [2024-04-24 22:15:04.400877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.400898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.417375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.417418] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.417439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.429223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.429257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.429277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.444320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.444354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.444373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.457730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.457764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.457782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.472330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.472365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.472384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.486950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.486985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.487013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.499780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.499814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.499833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.515001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.515036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.515054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.529741]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.529775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.529794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.541665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.541699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.541718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.558383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.558424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.558444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.574529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.574562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.574588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0
sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.587231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.587266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.587284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.363 [2024-04-24 22:15:04.604094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.363 [2024-04-24 22:15:04.604129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.363 [2024-04-24 22:15:04.604148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.622 [2024-04-24 22:15:04.622052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.622 [2024-04-24 22:15:04.622086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.622 [2024-04-24 22:15:04.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.622 [2024-04-24 22:15:04.634080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30)
00:23:22.622 [2024-04-24 22:15:04.634114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.622 [2024-04-24 22:15:04.634133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.650529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.650563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.650582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.665261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.665295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.665314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.680274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.680308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.680327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.693732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.693766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.693784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.711769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.711808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.711828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.724032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.724066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.724085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.739094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.739128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.739147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.756708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.756741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7212 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.756760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.771029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.771062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.782674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.782708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.782727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.798557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.798591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.798610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.817621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.817659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.817678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.832752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.832786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.832804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.845077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.845111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.845129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.622 [2024-04-24 22:15:04.862509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.622 [2024-04-24 22:15:04.862542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.622 [2024-04-24 22:15:04.862561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.880 [2024-04-24 22:15:04.877742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.880 [2024-04-24 22:15:04.877775] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.880 [2024-04-24 22:15:04.877793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.891723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.891756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.891774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.906679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.906712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.906731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.918003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.918037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.918055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.934207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.934243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.934262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.951550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.951582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.951601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.969749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.969789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.969809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.984346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.984380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.984407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:04.998720] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:04.998754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:04.998772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.011192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.011227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.011246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.025454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.025488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.025506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.037895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.037929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.037948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.053791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.053825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.053844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.068604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.068637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.068655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.081583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.081616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.081634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.094042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.094076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.094095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.108630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.108664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.108682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.881 [2024-04-24 22:15:05.123385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:22.881 [2024-04-24 22:15:05.123427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.881 [2024-04-24 22:15:05.123447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.138071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.138106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.138125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.150437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.150475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.150494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.164874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.164908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.164927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.179108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.179141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.179159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.191782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.191816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.191834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.206480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.206513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:23.139 [2024-04-24 22:15:05.206538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.222707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.222741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.222759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.235255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.235289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.235308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.248272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.248306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.248325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.261902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.261935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.261954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.139 [2024-04-24 22:15:05.277647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.139 [2024-04-24 22:15:05.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.139 [2024-04-24 22:15:05.277700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.289202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.289235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.289254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.307064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.307098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.307128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.318823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.318856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.318877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.333302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.333342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.333362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.350895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.350930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.350949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.369227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.369261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.369280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.140 [2024-04-24 22:15:05.385236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7b5d30) 00:23:23.140 [2024-04-24 22:15:05.385269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.140 [2024-04-24 22:15:05.385288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.398115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.398150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.398168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.412279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.412314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.412333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.427716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.427753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.427772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.441233] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.441266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.441285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.456688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.456722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.456741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.469137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.469171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.469189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.485669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.485703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.485722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.500703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.500737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.500756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.512598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.512632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.512651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.526659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.526693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.526712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.544810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.544844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.544863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.561437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.561471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.561490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 [2024-04-24 22:15:05.574719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b5d30) 00:23:23.398 [2024-04-24 22:15:05.574753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.398 [2024-04-24 22:15:05.574772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.398 00:23:23.398 Latency(us) 00:23:23.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.398 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:23.398 nvme0n1 : 2.01 17478.02 68.27 0.00 0.00 7314.86 3640.89 26020.22 00:23:23.398 =================================================================================================================== 00:23:23.398 Total : 17478.02 68.27 0.00 0.00 7314.86 3640.89 26020.22 00:23:23.398 0 00:23:23.398 22:15:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:23.398 22:15:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:23.398 22:15:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:23.398 22:15:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:23:23.398 | .driver_specific 00:23:23.398 | .nvme_error 00:23:23.398 | .status_code 00:23:23.398 | .command_transient_transport_error' 00:23:23.963 22:15:05 -- host/digest.sh@71 -- # (( 137 > 0 )) 00:23:23.963 22:15:05 -- host/digest.sh@73 -- # killprocess 4030393 00:23:23.963 22:15:05 -- common/autotest_common.sh@936 -- # '[' -z 4030393 ']' 00:23:23.963 22:15:05 -- common/autotest_common.sh@940 -- # kill -0 4030393 00:23:23.963 22:15:05 -- common/autotest_common.sh@941 -- # uname 00:23:23.963 22:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.963 22:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4030393 00:23:23.964 22:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:23.964 22:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:23.964 22:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4030393' 00:23:23.964 killing process with pid 4030393 00:23:23.964 22:15:05 -- common/autotest_common.sh@955 -- # kill 4030393 00:23:23.964 Received shutdown signal, test time was about 2.000000 seconds 00:23:23.964 00:23:23.964 Latency(us) 00:23:23.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.964 =================================================================================================================== 00:23:23.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.964 22:15:05 -- common/autotest_common.sh@960 -- # wait 4030393 00:23:24.221 22:15:06 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:24.221 22:15:06 -- host/digest.sh@54 -- # local rw bs qd 00:23:24.221 22:15:06 -- host/digest.sh@56 -- # rw=randread 00:23:24.221 22:15:06 -- host/digest.sh@56 -- # bs=131072 00:23:24.221 22:15:06 -- host/digest.sh@56 -- # qd=16 00:23:24.221 22:15:06 -- host/digest.sh@58 -- # bperfpid=4030925 00:23:24.221 22:15:06 -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:24.221 22:15:06 -- host/digest.sh@60 -- # waitforlisten 4030925 /var/tmp/bperf.sock 00:23:24.221 22:15:06 -- common/autotest_common.sh@817 -- # '[' -z 4030925 ']' 00:23:24.221 22:15:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:24.221 22:15:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:24.221 22:15:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:24.221 22:15:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:24.221 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:23:24.221 [2024-04-24 22:15:06.297324] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:24.221 [2024-04-24 22:15:06.297432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030925 ] 00:23:24.221 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:24.222 Zero copy mechanism will not be used. 
00:23:24.222 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.222 [2024-04-24 22:15:06.372611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.479 [2024-04-24 22:15:06.492300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.479 22:15:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:24.479 22:15:06 -- common/autotest_common.sh@850 -- # return 0 00:23:24.479 22:15:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:24.479 22:15:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:25.044 22:15:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:25.044 22:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.044 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:23:25.044 22:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.044 22:15:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.044 22:15:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:25.305 nvme0n1 00:23:25.305 22:15:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:25.305 22:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.305 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:23:25.305 22:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.305 22:15:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:25.305 22:15:07 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:25.564 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:25.564 Zero copy mechanism will not be used. 00:23:25.564 Running I/O for 2 seconds... 00:23:25.564 [2024-04-24 22:15:07.683896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.683953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.683976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.694192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.694235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.694255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.704413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.704453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.704473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.714066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 
[2024-04-24 22:15:07.714101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.714119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.723601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.723635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.723653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.733770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.743860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.743904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.743922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.754092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.754125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.754143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.764773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.764808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.764827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.775120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.775173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.785117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.785150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.785168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.794959] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.794992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.805582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.805616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.805635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.564 [2024-04-24 22:15:07.815584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.564 [2024-04-24 22:15:07.815619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.564 [2024-04-24 22:15:07.815639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.825942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.825977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.825996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.835566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.835599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.835618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.844544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.844579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.844599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.854321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.854355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.854373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.863869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.863903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.863922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.873900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.873935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.873954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.883688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.883722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.883741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.893062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.893095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.893113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.902491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.902534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.902561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.912129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.912162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.921620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.921653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.921672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.931106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.931140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.931157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.940672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.940705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.940724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.950185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.950219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.950237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.959742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.959774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.959794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.969305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.969338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.969356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.979307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.824 [2024-04-24 22:15:07.979341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.824 [2024-04-24 22:15:07.979360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.824 [2024-04-24 22:15:07.989016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:07.989050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:07.989069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:07.998570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:07.998602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:07.998621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.008286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.008319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.008338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.017853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.017886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.017905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.027413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.027446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.027464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.037364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.037409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.037430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.047169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.047204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.047223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.056758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 
00:23:25.825 [2024-04-24 22:15:08.056792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.056811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.066194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.066228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.066253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.825 [2024-04-24 22:15:08.075701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:25.825 [2024-04-24 22:15:08.075736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:25.825 [2024-04-24 22:15:08.075755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.084 [2024-04-24 22:15:08.085169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.084 [2024-04-24 22:15:08.085203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.084 [2024-04-24 22:15:08.085221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.084 [2024-04-24 22:15:08.094617] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.085 [2024-04-24 22:15:08.094660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.085 [2024-04-24 22:15:08.094679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.085 [2024-04-24 22:15:08.103878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.085 [2024-04-24 22:15:08.103913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.085 [2024-04-24 22:15:08.103932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.085 [2024-04-24 22:15:08.113374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.085 [2024-04-24 22:15:08.113427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.085 [2024-04-24 22:15:08.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.085 [2024-04-24 22:15:08.122632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.085 [2024-04-24 22:15:08.122665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.085 [2024-04-24 22:15:08.122683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:23:26.085 [2024-04-24 22:15:08.131893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70)
00:23:26.085 [2024-04-24 22:15:08.131928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:26.085 [2024-04-24 22:15:08.131951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line record repeats for dozens of further READ commands between 22:15:08.140941 and 22:15:08.921070 (log clock 00:23:26.085 through 00:23:26.889): each is a data digest error on tqpair=(0x977c70) from nvme_tcp.c:1447, followed by the failing READ (sqid:1, nsid:1, len:32, varying cid and lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 ...]
00:23:26.889 [2024-04-24 22:15:08.931057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70)
00:23:26.889 [2024-04-24 22:15:08.931091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.931110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.941843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.941879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.941898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.953189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.953225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.953244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.963724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.963772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.963792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.974310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.974345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.974364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.985461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.985497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.985516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:08.997283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:08.997319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:08.997338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.007823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.007858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.007878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.019229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 
00:23:26.889 [2024-04-24 22:15:09.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.019283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.030733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.030768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.030787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.042295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.042330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.042349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.053341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.053375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.053404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.064124] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.064158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.064176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.074692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.074725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.074744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.085233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.085267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.085285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.095858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.095891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.095910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.106332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.106365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.106383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.116738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.116772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.116790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:26.889 [2024-04-24 22:15:09.127341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:26.889 [2024-04-24 22:15:09.127377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.889 [2024-04-24 22:15:09.127403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.137477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.137511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.137530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.147321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.147360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.147380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.157084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.157118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.157137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.166714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.166747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.166766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.176435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.176468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.176486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.185972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.186006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.195534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.195566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.195585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.205041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.205073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.205092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.214403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.214436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:27.151 [2024-04-24 22:15:09.214454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.224701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.224735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.224753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.234799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.234833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.234851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.244551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.244584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.244602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.254284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.254318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.254337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.263952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.263985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.264003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.273907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.273941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.273959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.283293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.283326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.283344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.292709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.292742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.292760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.302515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.302549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.302568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.312033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.312067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.312092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.321505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.321538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.321557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.331706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 
00:23:27.151 [2024-04-24 22:15:09.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.341251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.341284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.341303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.350870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.350905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.350924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.360682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.360717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.360735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.370359] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.370404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.370426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.380048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.380082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.380101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.390021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.390056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.390075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.151 [2024-04-24 22:15:09.400332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.151 [2024-04-24 22:15:09.400374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.151 [2024-04-24 22:15:09.400402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.410050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.410084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.410103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.419890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.419926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.419945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.430240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.430274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.430293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.439852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.439887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.439905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.449487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.449519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.449538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.459062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.459095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.459114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.468541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.468575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.468594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.478787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.478821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.478839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.487820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.487854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.410 [2024-04-24 22:15:09.487873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.410 [2024-04-24 22:15:09.497833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.410 [2024-04-24 22:15:09.497868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.497887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.507193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.507235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.507255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.516521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.516554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:27.411 [2024-04-24 22:15:09.516573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.526373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.526421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.526442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.535713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.535747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.535772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.544871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.544904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.544927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.553996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.554029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.554047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.563302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.563335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.563362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.572825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.572859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.572877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.581796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.581829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.581848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.591321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.591355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.591376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.600473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.600507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.600526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.610047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.610081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.610100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.619712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.619746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.619770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.628815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 
00:23:27.411 [2024-04-24 22:15:09.628859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.628877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.638059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.638103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.638121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.647276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.647314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.647334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.411 [2024-04-24 22:15:09.656583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.411 [2024-04-24 22:15:09.656616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.411 [2024-04-24 22:15:09.656634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:27.669 [2024-04-24 22:15:09.667262] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x977c70) 00:23:27.669 [2024-04-24 22:15:09.667298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.669 [2024-04-24 22:15:09.667320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:27.669 00:23:27.669 Latency(us) 00:23:27.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.669 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:27.669 nvme0n1 : 2.00 3079.69 384.96 0.00 0.00 5191.31 1328.92 14272.28 00:23:27.669 =================================================================================================================== 00:23:27.669 Total : 3079.69 384.96 0.00 0.00 5191.31 1328.92 14272.28 00:23:27.669 0 00:23:27.669 22:15:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:27.669 22:15:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:27.669 22:15:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:27.669 | .driver_specific 00:23:27.669 | .nvme_error 00:23:27.669 | .status_code 00:23:27.669 | .command_transient_transport_error' 00:23:27.669 22:15:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:27.927 22:15:10 -- host/digest.sh@71 -- # (( 198 > 0 )) 00:23:27.927 22:15:10 -- host/digest.sh@73 -- # killprocess 4030925 00:23:27.927 22:15:10 -- common/autotest_common.sh@936 -- # '[' -z 4030925 ']' 00:23:27.927 22:15:10 -- common/autotest_common.sh@940 -- # kill -0 4030925 00:23:27.927 22:15:10 -- common/autotest_common.sh@941 -- # uname 00:23:27.927 22:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.927 22:15:10 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 4030925 00:23:27.927 22:15:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:27.927 22:15:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:27.927 22:15:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4030925' 00:23:27.927 killing process with pid 4030925 00:23:27.927 22:15:10 -- common/autotest_common.sh@955 -- # kill 4030925 00:23:27.927 Received shutdown signal, test time was about 2.000000 seconds 00:23:27.927 00:23:27.927 Latency(us) 00:23:27.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.927 =================================================================================================================== 00:23:27.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.927 22:15:10 -- common/autotest_common.sh@960 -- # wait 4030925 00:23:28.493 22:15:10 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:28.493 22:15:10 -- host/digest.sh@54 -- # local rw bs qd 00:23:28.493 22:15:10 -- host/digest.sh@56 -- # rw=randwrite 00:23:28.493 22:15:10 -- host/digest.sh@56 -- # bs=4096 00:23:28.493 22:15:10 -- host/digest.sh@56 -- # qd=128 00:23:28.493 22:15:10 -- host/digest.sh@58 -- # bperfpid=4031850 00:23:28.493 22:15:10 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:28.493 22:15:10 -- host/digest.sh@60 -- # waitforlisten 4031850 /var/tmp/bperf.sock 00:23:28.493 22:15:10 -- common/autotest_common.sh@817 -- # '[' -z 4031850 ']' 00:23:28.493 22:15:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:28.493 22:15:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:28.493 22:15:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:28.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:28.493 22:15:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:28.493 22:15:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.493 [2024-04-24 22:15:10.507985] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:28.493 [2024-04-24 22:15:10.508080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031850 ] 00:23:28.493 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.493 [2024-04-24 22:15:10.577587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.493 [2024-04-24 22:15:10.695711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.750 22:15:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.750 22:15:10 -- common/autotest_common.sh@850 -- # return 0 00:23:28.750 22:15:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:28.750 22:15:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:29.008 22:15:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:29.008 22:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.008 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:23:29.008 22:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.008 22:15:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:29.008 22:15:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:29.574 nvme0n1 00:23:29.574 22:15:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:29.574 22:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.574 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:23:29.574 22:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.575 22:15:11 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:29.575 22:15:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:29.575 Running I/O for 2 seconds... 00:23:29.575 [2024-04-24 22:15:11.817725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.575 [2024-04-24 22:15:11.818050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.575 [2024-04-24 22:15:11.818094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.832181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.832474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.832508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.846534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.846824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:18919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.860916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.861198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.861231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.875643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.875979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.876011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.890365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.890710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.905080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.905419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.905450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.919627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.919912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.919943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.933981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.934297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.934328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.948313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.948649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.948682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.962731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 
22:15:11.963063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.963095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.977285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.977633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.977664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:11.991995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:11.992330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:11.992361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.006623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:12.006964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:12.006995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.021253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with 
pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:12.021595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:12.021626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.035811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:12.036143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:12.036174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.050514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:12.050845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:12.050876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.065129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.833 [2024-04-24 22:15:12.065470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.833 [2024-04-24 22:15:12.065501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.833 [2024-04-24 22:15:12.079705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:29.834 [2024-04-24 22:15:12.080057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:29.834 [2024-04-24 22:15:12.080099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.094339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.094662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.094693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.108978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.109316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.109347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.123611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.123951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.123981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.138142] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.138474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.138505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.152681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.153026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.153056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.167308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.167651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.167682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.181946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.182287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.182317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:30.092 [2024-04-24 22:15:12.196630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.196960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.196992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.211175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.211506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.211537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.225779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.226119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.226157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.240462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.240790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.240820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.255067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.255404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.092 [2024-04-24 22:15:12.255435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.092 [2024-04-24 22:15:12.269718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.092 [2024-04-24 22:15:12.270034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.270065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.093 [2024-04-24 22:15:12.284218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.093 [2024-04-24 22:15:12.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.284576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.093 [2024-04-24 22:15:12.298770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.093 [2024-04-24 22:15:12.299118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.299149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.093 [2024-04-24 22:15:12.313356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.093 [2024-04-24 22:15:12.313698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.313730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.093 [2024-04-24 22:15:12.327922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.093 [2024-04-24 22:15:12.328265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.328297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.093 [2024-04-24 22:15:12.342407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.093 [2024-04-24 22:15:12.342748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.093 [2024-04-24 22:15:12.342779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.357006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.357295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.357326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.371603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.371938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.371969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.386185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.386531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.386561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.400789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.401121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.401152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.415372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.415680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 
22:15:12.415711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.429976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.430306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.430337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.444521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.444862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.444893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.459026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.459372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.459444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.473582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.473937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14936 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:30.351 [2024-04-24 22:15:12.473970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.488142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.488503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.488535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.502665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.503026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.503058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.351 [2024-04-24 22:15:12.517257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.351 [2024-04-24 22:15:12.517626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.351 [2024-04-24 22:15:12.517658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.531817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.532167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14814 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.532198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.546367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.546733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.546765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.560959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.561320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.561350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.575512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.575867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.575897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.590056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.590412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:11251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.590445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.352 [2024-04-24 22:15:12.604619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.352 [2024-04-24 22:15:12.604982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.352 [2024-04-24 22:15:12.605015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.619159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.619513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.619545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.633749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.634104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.634134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.648293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.648664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.648694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.662869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.663189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.663219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.677426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.677746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.677777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.692009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.692360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.692391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.706584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 
22:15:12.706934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.706965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.721103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.721457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.721488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.735666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.736020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.736056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.750161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.750516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.764674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with 
pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.765026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.765057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.779201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.779565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.779596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.793729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.794081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.794112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.808304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.808661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.808692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.822723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.823034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.823063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.837126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.837506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.837537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.610 [2024-04-24 22:15:12.851584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.610 [2024-04-24 22:15:12.851914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.610 [2024-04-24 22:15:12.851954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.865984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.866320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.866351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.880745] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.881030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.881061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.895222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.895516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.895547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.909645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.909967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.909998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.924078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.924480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.924511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:30.869 [2024-04-24 22:15:12.938553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.938872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.938903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.953009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.953368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.953406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.967498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.967833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.982029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.982386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:12.996581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:12.996899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:12.996931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.011054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.011412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.011443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.025662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.026028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.026059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.040129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.040492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.040523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.054728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.055083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.055113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.069299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.069670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.069701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.083894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.084257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.084288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.098420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.098736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.098767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.869 [2024-04-24 22:15:13.112722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:30.869 [2024-04-24 22:15:13.113040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.869 [2024-04-24 22:15:13.113075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.126969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.127288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.127318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.141355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.141696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.141727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.155819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.156205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 
22:15:13.156236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.170316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.170680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.170712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.184728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.185102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.185133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.199256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.199567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.199599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.213810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.214165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9878 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:31.128 [2024-04-24 22:15:13.214195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.228300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.228668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.228698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.242870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.243222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.243259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.257386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.257716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.257746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.271823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.272172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16253 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.272203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.286407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.286788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.286819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.301032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.301389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.301427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.315611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.315976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.316007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.330279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.330645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.330676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.344871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.345225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.345255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.359460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.359802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.359832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.128 [2024-04-24 22:15:13.374023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.128 [2024-04-24 22:15:13.374399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.128 [2024-04-24 22:15:13.374431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.388671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.389030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.389060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.403390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.403736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.403767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.418090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.418450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.418480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.432703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.433064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.433095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.447240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 
22:15:13.447610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.447641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.461948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.462305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.462336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.476577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.476933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.476963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.491229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.491591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.491623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.505827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with 
pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.506184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.506214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.520458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.520814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.520845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.535011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.535364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.388 [2024-04-24 22:15:13.535403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.388 [2024-04-24 22:15:13.549602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.388 [2024-04-24 22:15:13.549965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.549996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.564189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.564561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.564593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.578720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.579073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.579103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.593287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.593650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.593681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.607860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.608220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.608251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.622500] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.622851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.622889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.389 [2024-04-24 22:15:13.637069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.389 [2024-04-24 22:15:13.637439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.389 [2024-04-24 22:15:13.637469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.651730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.652089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.652119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.666350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.666714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.666745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:31.648 [2024-04-24 22:15:13.680957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.681310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.681340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.695581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.695941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.695971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.710208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.710545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.710576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.724854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.725205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.725235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.739426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.739784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.739814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.753954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.754307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.754344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.768536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.768891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.768921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.648 [2024-04-24 22:15:13.783054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8 00:23:31.648 [2024-04-24 22:15:13.783418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:31.648 [2024-04-24 22:15:13.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.648 [2024-04-24 22:15:13.797718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef130) with pdu=0x2000190fb8b8
00:23:31.648 [2024-04-24 22:15:13.798070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:31.648 [2024-04-24 22:15:13.798101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:31.648
00:23:31.648 Latency(us)
00:23:31.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.648 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:31.648 nvme0n1 : 2.01 17516.19 68.42 0.00 0.00 7290.12 4878.79 14854.83
00:23:31.648 ===================================================================================================================
00:23:31.648 Total : 17516.19 68.42 0.00 0.00 7290.12 4878.79 14854.83
00:23:31.648 0
00:23:31.648 22:15:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:31.648 22:15:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:31.648 22:15:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:31.648 | .driver_specific
00:23:31.648 | .nvme_error
00:23:31.648 | .status_code
00:23:31.648 | .command_transient_transport_error'
00:23:31.648 22:15:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:31.906 22:15:14 -- host/digest.sh@71 -- # (( 137 > 0 ))
00:23:31.906 22:15:14 -- host/digest.sh@73 -- # killprocess 4031850
00:23:31.906 22:15:14 -- common/autotest_common.sh@936 -- # '[' -z 4031850 ']'
00:23:31.906 22:15:14 -- common/autotest_common.sh@940 -- # kill -0 4031850
00:23:31.906 22:15:14 -- common/autotest_common.sh@941 -- # uname
00:23:31.906 22:15:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:31.906 22:15:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4031850
00:23:32.165 22:15:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:32.165 22:15:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:32.165 22:15:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4031850'
00:23:32.165 killing process with pid 4031850
00:23:32.165 22:15:14 -- common/autotest_common.sh@955 -- # kill 4031850
00:23:32.165 Received shutdown signal, test time was about 2.000000 seconds
00:23:32.165
00:23:32.165 Latency(us)
00:23:32.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:32.165 ===================================================================================================================
00:23:32.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:32.165 22:15:14 -- common/autotest_common.sh@960 -- # wait 4031850
00:23:32.424 22:15:14 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:32.424 22:15:14 -- host/digest.sh@54 -- # local rw bs qd
00:23:32.424 22:15:14 -- host/digest.sh@56 -- # rw=randwrite
00:23:32.424 22:15:14 -- host/digest.sh@56 -- # bs=131072
00:23:32.424 22:15:14 -- host/digest.sh@56 -- # qd=16
00:23:32.424 22:15:14 -- host/digest.sh@58 -- # bperfpid=4032380
00:23:32.424 22:15:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:32.424 22:15:14 -- host/digest.sh@60 -- # waitforlisten 4032380 /var/tmp/bperf.sock
00:23:32.424 22:15:14 -- common/autotest_common.sh@817 -- # '[' -z 4032380 ']'
00:23:32.424 22:15:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:32.424 22:15:14 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:32.424 22:15:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:32.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:32.424 22:15:14 -- common/autotest_common.sh@826 -- # xtrace_disable
00:23:32.424 22:15:14 -- common/autotest_common.sh@10 -- # set +x
00:23:32.424 [2024-04-24 22:15:14.557549] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:23:32.424 [2024-04-24 22:15:14.557716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032380 ]
00:23:32.424 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:32.424 Zero copy mechanism will not be used.
00:23:32.424 EAL: No free 2048 kB hugepages reported on node 1
00:23:32.424 [2024-04-24 22:15:14.667916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:32.683 [2024-04-24 22:15:14.786391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:32.941 22:15:15 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:23:32.941 22:15:15 -- common/autotest_common.sh@850 -- # return 0
00:23:32.941 22:15:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:32.941 22:15:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:33.199 22:15:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:33.199 22:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:33.199 22:15:15 -- common/autotest_common.sh@10 -- # set +x
00:23:33.199 22:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:33.199 22:15:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:33.199 22:15:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:33.765 nvme0n1
00:23:33.765 22:15:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:33.765 22:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:33.765 22:15:16 -- common/autotest_common.sh@10 -- # set +x
00:23:34.023 22:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:34.023 22:15:16 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:34.023 22:15:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:34.023 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:34.023 Zero copy mechanism will not be used.
00:23:34.023 Running I/O for 2 seconds...
00:23:34.023 [2024-04-24 22:15:16.254219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.023 [2024-04-24 22:15:16.254619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.023 [2024-04-24 22:15:16.254672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.023 [2024-04-24 22:15:16.265021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.023 [2024-04-24 22:15:16.265466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.023 [2024-04-24 22:15:16.265501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.023 [2024-04-24 22:15:16.274882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.023 [2024-04-24 22:15:16.275252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.023 [2024-04-24 22:15:16.275286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.284376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.284533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.284565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.294213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.294576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.294609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.305009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.305381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.305422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.315354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.315720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.315753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.325302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.325686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.325719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.335556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.335944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.335976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.345504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.345858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.345890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.356488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.356839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.356872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.367645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.283 [2024-04-24 22:15:16.368028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.378355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.378771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.378815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.388627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.388976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.389008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.397992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.398349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.398381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.408190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.408584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.408617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.417675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.418041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.418072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.427516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.427897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.427939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.437520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.437919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.437961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.448084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.448494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.448526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.457981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.458349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.458380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.468227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.468677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.468710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.478257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.283 [2024-04-24 22:15:16.478658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.478691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.488582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 
00:23:34.283 [2024-04-24 22:15:16.488997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.283 [2024-04-24 22:15:16.489038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.283 [2024-04-24 22:15:16.498971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.284 [2024-04-24 22:15:16.499342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.284 [2024-04-24 22:15:16.499375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.284 [2024-04-24 22:15:16.509198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.284 [2024-04-24 22:15:16.509616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.284 [2024-04-24 22:15:16.509648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.284 [2024-04-24 22:15:16.519498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.284 [2024-04-24 22:15:16.519896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.284 [2024-04-24 22:15:16.519928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.284 [2024-04-24 22:15:16.529313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.284 [2024-04-24 22:15:16.529669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.284 [2024-04-24 22:15:16.529701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.542 [2024-04-24 22:15:16.539295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.542 [2024-04-24 22:15:16.539438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.539468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.550126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.550503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.550536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.560416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.560788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.560830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.570231] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.570441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.570474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.580222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.580617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.580650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.589534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.589868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.589900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.599025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.599372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.599412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.608515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.608851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.608892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.617906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.618335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.618367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.627273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.627627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.627659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.636848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.637216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.637248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.646155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.646501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.646544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.655953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.656329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.656361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.665046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.665400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.665441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.674431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.674829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.674861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.683472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.683806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.683839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.692645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.693060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.693102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.702286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.702645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.711722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.712064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.543 [2024-04-24 22:15:16.712097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.720637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.721028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.721070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.730020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.730382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.730421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.739379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.739761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.739803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.748902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.749302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.758267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.758649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.758681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.767675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.768032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.768064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.777659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.778076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.778107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.787884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.543 [2024-04-24 22:15:16.788301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.543 [2024-04-24 22:15:16.788333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.543 [2024-04-24 22:15:16.797847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.798223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.798254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.807772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.808128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.808160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.818030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.818349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.827221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 
00:23:34.803 [2024-04-24 22:15:16.827673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.827710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.837430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.837813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.837853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.847030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.847363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.847403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.856325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.803 [2024-04-24 22:15:16.856692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.803 [2024-04-24 22:15:16.856743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.803 [2024-04-24 22:15:16.865652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.866003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.866035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.874694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.875011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.875043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.883782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.884129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.884170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.893226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.893582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.893620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 
22:15:16.902234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.902611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.902645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.911483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.911858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.921738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.922145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.922185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.931585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.931980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.932012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.941513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.941846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.941878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.951094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.951423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.951455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.960570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.960932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.960964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.970222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.970592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.979668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.979986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.980018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.988535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.988880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.988913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:16.998041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:16.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:16.998392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:17.007244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:17.007598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:17.007630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:17.016163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:17.016534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:17.016567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:17.025389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:17.025760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:17.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:17.034836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:17.035171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.804 [2024-04-24 22:15:17.035203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:34.804 [2024-04-24 22:15:17.044352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:34.804 [2024-04-24 22:15:17.044697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:23:34.804 [2024-04-24 22:15:17.044729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:34.804 [2024-04-24 22:15:17.053770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:34.804 [2024-04-24 22:15:17.054089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.804 [2024-04-24 22:15:17.054133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.063430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.063835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.063878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.073191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.073540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.073572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.082661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.082994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.083025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.092181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.092601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.092632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.101031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.101417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.101466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.110849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.111260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.111292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.120335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.120766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.120798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.130514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.130869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.130901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.140119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.140493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.140535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.150059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.150446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.150479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.159603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.159937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.159970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.169362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.169695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.169729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.179207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.179620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.179652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.189532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.189934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.189977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.199534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.199900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.199933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.208566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.208902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.208935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.218235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.218672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.218704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.228622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.229048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.229090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.238008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.238430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.238462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.247384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.247811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.064 [2024-04-24 22:15:17.247842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.064 [2024-04-24 22:15:17.257077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.257421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.257461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.266441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.266810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.266841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.276883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.277325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.277358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.286715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.287089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.287121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.296207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.296619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.296650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.305733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.306070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.306108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.065 [2024-04-24 22:15:17.314700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.065 [2024-04-24 22:15:17.315106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.065 [2024-04-24 22:15:17.315138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.324 [2024-04-24 22:15:17.324013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.324 [2024-04-24 22:15:17.324375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.324 [2024-04-24 22:15:17.324420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.324 [2024-04-24 22:15:17.333329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.324 [2024-04-24 22:15:17.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.324 [2024-04-24 22:15:17.333702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.324 [2024-04-24 22:15:17.342461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.324 [2024-04-24 22:15:17.342812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.342844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.351750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.352107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.361243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.361630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.361670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.370491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.370829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.370862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.379769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.380105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.380138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.389216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.389612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.389655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.398819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.399135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.399178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.407841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.408190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.408222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.416822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.417166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.417201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.426114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.426457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.435213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.435549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.435581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.444319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.444694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.444726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.453475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.453792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.453824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.462023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.462350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.462382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.470960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.471309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.471353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.480272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.480717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.480750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.489139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.489478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.489510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.498210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.498539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.498572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.506801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.507134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.507167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.515517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.515834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.515867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.524715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.525135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.525168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.533901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.534325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.534357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.542820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.543176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.543208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.552963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.325 [2024-04-24 22:15:17.553358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.325 [2024-04-24 22:15:17.553390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.325 [2024-04-24 22:15:17.561822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.064 [2024-04-24 22:15:17.562133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.326 [2024-04-24 22:15:17.562166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.326 [2024-04-24 22:15:17.570906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.326 [2024-04-24 22:15:17.571226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.326 [2024-04-24 22:15:17.571270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.580261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.580607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.580640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.589344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.589684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.589736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.598710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.599041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.599073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.608358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.608837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.608869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.617647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.618051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.618083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.627038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.585 [2024-04-24 22:15:17.627426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.585 [2024-04-24 22:15:17.627463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.585 [2024-04-24 22:15:17.636150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.636476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.636509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.645511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.645946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.645977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.654773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.655104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.655136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.663330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.663687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.663723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.672742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.673099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.673132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.682312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.682668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.682711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.691541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.691905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.691945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.700201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.700589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.700626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.709268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.709594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.718236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.718616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.727550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.727884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.727927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.736295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.736640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.736672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.745384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.745750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.745782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.754421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.754794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.754825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.763404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.763735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.763767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.771814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.772149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.772181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.781203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.781573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.781606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.789844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.586 [2024-04-24 22:15:17.790164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.586 [2024-04-24 22:15:17.790196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:35.586 [2024-04-24 22:15:17.798622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.587 [2024-04-24 22:15:17.799071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:35.587 [2024-04-24 22:15:17.799102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:35.587 [2024-04-24 22:15:17.807976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90
00:23:35.587 [2024-04-24 22:15:17.808405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.587 [2024-04-24 22:15:17.808436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.587 [2024-04-24 22:15:17.816775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.587 [2024-04-24 22:15:17.817102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.587 [2024-04-24 22:15:17.817135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.587 [2024-04-24 22:15:17.825230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.587 [2024-04-24 22:15:17.825559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.587 [2024-04-24 22:15:17.825602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.587 [2024-04-24 22:15:17.833672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.587 [2024-04-24 22:15:17.834027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.587 [2024-04-24 22:15:17.834059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.841955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.842313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.842345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.850591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.850913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.850944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.859174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.859500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.859532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.868507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.868854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.868886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 
22:15:17.877665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.878011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.878043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.886404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.886864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.886896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.895391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.895719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.895751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.904536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.904872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.904905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.913796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.914132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.914164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.923035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.923403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.923435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.932869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.933218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.933250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.941994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.942338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.942369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.951372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.951714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.951746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.960587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.960971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.961003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.969772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.970120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.970151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.979571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.847 [2024-04-24 22:15:17.979904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.847 [2024-04-24 22:15:17.979954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.847 [2024-04-24 22:15:17.989161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:17.989507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:17.989540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:17.998530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:17.998851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:17.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.007436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.007773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.007806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.016892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.017317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.017348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.026196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.026534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.026566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.035034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.035369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.035409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.044300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.044624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.044656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.053665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.053985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.054017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.062792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.063217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.072341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.072779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.072821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.081199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.081523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.081555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.090642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.091040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.091071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.848 [2024-04-24 22:15:18.099847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:35.848 [2024-04-24 22:15:18.100200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.848 [2024-04-24 22:15:18.100232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.107 [2024-04-24 22:15:18.108885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.107 [2024-04-24 22:15:18.109205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.107 [2024-04-24 22:15:18.109237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.107 [2024-04-24 22:15:18.118366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.107 [2024-04-24 22:15:18.118727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.107 [2024-04-24 22:15:18.118760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.127975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 
00:23:36.108 [2024-04-24 22:15:18.128304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.128335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.137732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.138110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.138142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.147656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.148034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.156718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.157120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.157152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.166248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.166594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.166626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.175673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.176041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.176073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.184829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.185163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.193428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.193817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.193853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.202179] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.202535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.202567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.211734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.212056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.212088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.221935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.222276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.222325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.231605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.231944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.231976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.240863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.241250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.241288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.108 [2024-04-24 22:15:18.250332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6ef340) with pdu=0x2000190fef90 00:23:36.108 [2024-04-24 22:15:18.250580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.108 [2024-04-24 22:15:18.250623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.108 00:23:36.108 Latency(us) 00:23:36.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.108 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:36.108 nvme0n1 : 2.00 3281.14 410.14 0.00 0.00 4865.41 2706.39 11311.03 00:23:36.108 =================================================================================================================== 00:23:36.108 Total : 3281.14 410.14 0.00 0.00 4865.41 2706.39 11311.03 00:23:36.108 0 00:23:36.108 22:15:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:36.108 22:15:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:36.108 22:15:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:36.108 | .driver_specific 00:23:36.108 | .nvme_error 00:23:36.108 | .status_code 00:23:36.108 | .command_transient_transport_error' 00:23:36.108 22:15:18 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:36.675 22:15:18 -- host/digest.sh@71 -- # (( 212 > 0 )) 00:23:36.675 22:15:18 -- host/digest.sh@73 -- # killprocess 4032380 00:23:36.675 22:15:18 -- common/autotest_common.sh@936 -- # '[' -z 4032380 ']' 00:23:36.675 22:15:18 -- common/autotest_common.sh@940 -- # kill -0 4032380 00:23:36.675 22:15:18 -- common/autotest_common.sh@941 -- # uname 00:23:36.675 22:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.675 22:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4032380 00:23:36.675 22:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:36.675 22:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:36.675 22:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4032380' 00:23:36.675 killing process with pid 4032380 00:23:36.675 22:15:18 -- common/autotest_common.sh@955 -- # kill 4032380 00:23:36.675 Received shutdown signal, test time was about 2.000000 seconds 00:23:36.675 00:23:36.675 Latency(us) 00:23:36.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.675 =================================================================================================================== 00:23:36.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.675 22:15:18 -- common/autotest_common.sh@960 -- # wait 4032380 00:23:36.933 22:15:19 -- host/digest.sh@116 -- # killprocess 4030349 00:23:36.933 22:15:19 -- common/autotest_common.sh@936 -- # '[' -z 4030349 ']' 00:23:36.933 22:15:19 -- common/autotest_common.sh@940 -- # kill -0 4030349 00:23:36.933 22:15:19 -- common/autotest_common.sh@941 -- # uname 00:23:36.933 22:15:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.933 22:15:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4030349 00:23:36.933 22:15:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:36.933 22:15:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:36.933 22:15:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4030349' 00:23:36.933 killing process with pid 4030349 00:23:36.933 22:15:19 -- common/autotest_common.sh@955 -- # kill 4030349 00:23:36.933 [2024-04-24 22:15:19.138294] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:36.933 22:15:19 -- common/autotest_common.sh@960 -- # wait 4030349 00:23:37.191 00:23:37.191 real 0m17.645s 00:23:37.191 user 0m36.438s 00:23:37.191 sys 0m4.976s 00:23:37.191 22:15:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:37.191 22:15:19 -- common/autotest_common.sh@10 -- # set +x 00:23:37.191 ************************************ 00:23:37.191 END TEST nvmf_digest_error 00:23:37.191 ************************************ 00:23:37.450 22:15:19 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:37.450 22:15:19 -- host/digest.sh@150 -- # nvmftestfini 00:23:37.450 22:15:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:37.450 22:15:19 -- nvmf/common.sh@117 -- # sync 00:23:37.450 22:15:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.450 22:15:19 -- nvmf/common.sh@120 -- # set +e 00:23:37.450 22:15:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.450 22:15:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.450 rmmod nvme_tcp 00:23:37.450 rmmod nvme_fabrics 00:23:37.450 rmmod nvme_keyring 00:23:37.450 22:15:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.450 22:15:19 -- nvmf/common.sh@124 -- # set -e 00:23:37.450 22:15:19 -- nvmf/common.sh@125 -- # return 0 00:23:37.450 22:15:19 -- nvmf/common.sh@478 -- # '[' -n 4030349 ']' 00:23:37.450 22:15:19 -- nvmf/common.sh@479 -- # killprocess 4030349 00:23:37.450 
22:15:19 -- common/autotest_common.sh@936 -- # '[' -z 4030349 ']' 00:23:37.450 22:15:19 -- common/autotest_common.sh@940 -- # kill -0 4030349 00:23:37.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4030349) - No such process 00:23:37.450 22:15:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4030349 is not found' 00:23:37.450 Process with pid 4030349 is not found 00:23:37.450 22:15:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:37.450 22:15:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:37.450 22:15:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:37.450 22:15:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.450 22:15:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.450 22:15:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.450 22:15:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.450 22:15:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.351 22:15:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.351 00:23:39.351 real 0m40.455s 00:23:39.351 user 1m14.323s 00:23:39.351 sys 0m11.881s 00:23:39.351 22:15:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:39.351 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.351 ************************************ 00:23:39.351 END TEST nvmf_digest 00:23:39.351 ************************************ 00:23:39.351 22:15:21 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:23:39.351 22:15:21 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:23:39.351 22:15:21 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:23:39.351 22:15:21 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:39.351 22:15:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.351 22:15:21 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:23:39.351 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.610 ************************************ 00:23:39.610 START TEST nvmf_bdevperf 00:23:39.610 ************************************ 00:23:39.610 22:15:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:39.610 * Looking for test storage... 00:23:39.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.610 22:15:21 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.610 22:15:21 -- nvmf/common.sh@7 -- # uname -s 00:23:39.610 22:15:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.610 22:15:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.610 22:15:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.610 22:15:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.610 22:15:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.610 22:15:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.610 22:15:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.610 22:15:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.610 22:15:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.610 22:15:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.610 22:15:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:39.610 22:15:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:39.610 22:15:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.610 22:15:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.610 22:15:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.610 22:15:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.610 22:15:21 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.610 22:15:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.610 22:15:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.610 22:15:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.610 22:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.610 22:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.610 22:15:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.610 22:15:21 -- paths/export.sh@5 -- # export PATH 00:23:39.610 22:15:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.610 22:15:21 -- nvmf/common.sh@47 -- # : 0 00:23:39.610 22:15:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.610 22:15:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.610 22:15:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.610 22:15:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.610 22:15:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.610 22:15:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.610 22:15:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.610 22:15:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.610 22:15:21 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.610 22:15:21 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.610 22:15:21 -- host/bdevperf.sh@24 -- # 
nvmftestinit 00:23:39.610 22:15:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:39.610 22:15:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.610 22:15:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:39.610 22:15:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:39.610 22:15:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:39.610 22:15:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.610 22:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.610 22:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.610 22:15:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:39.610 22:15:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:39.610 22:15:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.610 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:23:42.140 22:15:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:42.140 22:15:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.140 22:15:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.140 22:15:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.140 22:15:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.140 22:15:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.140 22:15:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.140 22:15:24 -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.140 22:15:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.140 22:15:24 -- nvmf/common.sh@296 -- # e810=() 00:23:42.140 22:15:24 -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.140 22:15:24 -- nvmf/common.sh@297 -- # x722=() 00:23:42.140 22:15:24 -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.140 22:15:24 -- nvmf/common.sh@298 -- # mlx=() 00:23:42.140 22:15:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.140 22:15:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.140 22:15:24 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.140 22:15:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.140 22:15:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.140 22:15:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.140 22:15:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.140 22:15:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.140 22:15:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.140 22:15:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.141 22:15:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:42.141 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:42.141 22:15:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.141 22:15:24 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:42.141 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:42.141 22:15:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.141 22:15:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.141 22:15:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.141 22:15:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:42.141 22:15:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.141 22:15:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:42.141 Found net devices under 0000:84:00.0: cvl_0_0 00:23:42.141 22:15:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.141 22:15:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.141 22:15:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.141 22:15:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:42.141 22:15:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.141 22:15:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:42.141 Found net devices under 0000:84:00.1: cvl_0_1 00:23:42.141 22:15:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.141 22:15:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:42.141 22:15:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:42.141 22:15:24 -- nvmf/common.sh@405 -- # [[ yes == yes 
]] 00:23:42.141 22:15:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:42.141 22:15:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.141 22:15:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.141 22:15:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.141 22:15:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.141 22:15:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.141 22:15:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.141 22:15:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.141 22:15:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.141 22:15:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.141 22:15:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.141 22:15:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.141 22:15:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.141 22:15:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.141 22:15:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.141 22:15:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.141 22:15:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.141 22:15:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.141 22:15:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.141 22:15:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.141 22:15:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:42.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:23:42.141 00:23:42.141 --- 10.0.0.2 ping statistics --- 00:23:42.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.141 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:42.141 22:15:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:23:42.141 00:23:42.141 --- 10.0.0.1 ping statistics --- 00:23:42.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.141 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:42.141 22:15:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.141 22:15:24 -- nvmf/common.sh@411 -- # return 0 00:23:42.141 22:15:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:42.141 22:15:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.141 22:15:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:42.141 22:15:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.141 22:15:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:42.141 22:15:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:42.141 22:15:24 -- host/bdevperf.sh@25 -- # tgt_init 00:23:42.141 22:15:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:42.141 22:15:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:42.141 22:15:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:42.141 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.141 22:15:24 -- nvmf/common.sh@470 -- # nvmfpid=4034882 00:23:42.141 22:15:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.141 22:15:24 -- nvmf/common.sh@471 -- # waitforlisten 
4034882 00:23:42.141 22:15:24 -- common/autotest_common.sh@817 -- # '[' -z 4034882 ']' 00:23:42.141 22:15:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.141 22:15:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:42.141 22:15:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.141 22:15:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:42.141 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.141 [2024-04-24 22:15:24.232119] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:42.141 [2024-04-24 22:15:24.232205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.141 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.141 [2024-04-24 22:15:24.310901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.399 [2024-04-24 22:15:24.436082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.399 [2024-04-24 22:15:24.436149] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.399 [2024-04-24 22:15:24.436166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.399 [2024-04-24 22:15:24.436180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.399 [2024-04-24 22:15:24.436192] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.399 [2024-04-24 22:15:24.439425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.399 [2024-04-24 22:15:24.439486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.399 [2024-04-24 22:15:24.439490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.399 22:15:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:42.399 22:15:24 -- common/autotest_common.sh@850 -- # return 0 00:23:42.399 22:15:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:42.399 22:15:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:42.399 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.399 22:15:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.399 22:15:24 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.399 22:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.399 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.399 [2024-04-24 22:15:24.603218] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.399 22:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.399 22:15:24 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:42.399 22:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.399 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.399 Malloc0 00:23:42.399 22:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.399 22:15:24 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.399 22:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.399 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.399 22:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.399 22:15:24 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.399 22:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.399 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.658 22:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.658 22:15:24 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.658 22:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.658 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:23:42.658 [2024-04-24 22:15:24.664520] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:42.658 [2024-04-24 22:15:24.664872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.658 22:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.658 22:15:24 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:42.658 22:15:24 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:42.658 22:15:24 -- nvmf/common.sh@521 -- # config=() 00:23:42.658 22:15:24 -- nvmf/common.sh@521 -- # local subsystem config 00:23:42.658 22:15:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.658 22:15:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.658 { 00:23:42.658 "params": { 00:23:42.658 "name": "Nvme$subsystem", 00:23:42.658 "trtype": "$TEST_TRANSPORT", 00:23:42.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.658 "adrfam": "ipv4", 00:23:42.658 "trsvcid": "$NVMF_PORT", 00:23:42.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.658 "hdgst": ${hdgst:-false}, 00:23:42.658 "ddgst": ${ddgst:-false} 00:23:42.658 }, 00:23:42.658 "method": "bdev_nvme_attach_controller" 
00:23:42.658 } 00:23:42.658 EOF 00:23:42.658 )") 00:23:42.658 22:15:24 -- nvmf/common.sh@543 -- # cat 00:23:42.658 22:15:24 -- nvmf/common.sh@545 -- # jq . 00:23:42.658 22:15:24 -- nvmf/common.sh@546 -- # IFS=, 00:23:42.658 22:15:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:42.658 "params": { 00:23:42.658 "name": "Nvme1", 00:23:42.658 "trtype": "tcp", 00:23:42.658 "traddr": "10.0.0.2", 00:23:42.658 "adrfam": "ipv4", 00:23:42.658 "trsvcid": "4420", 00:23:42.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.658 "hdgst": false, 00:23:42.658 "ddgst": false 00:23:42.658 }, 00:23:42.658 "method": "bdev_nvme_attach_controller" 00:23:42.658 }' 00:23:42.658 [2024-04-24 22:15:24.715008] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:42.658 [2024-04-24 22:15:24.715088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035026 ] 00:23:42.658 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.658 [2024-04-24 22:15:24.784386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.658 [2024-04-24 22:15:24.906280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.224 Running I/O for 1 seconds... 
00:23:44.159 00:23:44.159 Latency(us) 00:23:44.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.159 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:44.159 Verification LBA range: start 0x0 length 0x4000 00:23:44.159 Nvme1n1 : 1.01 7908.02 30.89 0.00 0.00 16106.01 1310.72 14175.19 00:23:44.159 =================================================================================================================== 00:23:44.159 Total : 7908.02 30.89 0.00 0.00 16106.01 1310.72 14175.19 00:23:44.451 22:15:26 -- host/bdevperf.sh@30 -- # bdevperfpid=4035168 00:23:44.451 22:15:26 -- host/bdevperf.sh@32 -- # sleep 3 00:23:44.451 22:15:26 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:44.451 22:15:26 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:44.451 22:15:26 -- nvmf/common.sh@521 -- # config=() 00:23:44.451 22:15:26 -- nvmf/common.sh@521 -- # local subsystem config 00:23:44.451 22:15:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:44.451 22:15:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:44.451 { 00:23:44.451 "params": { 00:23:44.451 "name": "Nvme$subsystem", 00:23:44.451 "trtype": "$TEST_TRANSPORT", 00:23:44.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.451 "adrfam": "ipv4", 00:23:44.451 "trsvcid": "$NVMF_PORT", 00:23:44.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.451 "hdgst": ${hdgst:-false}, 00:23:44.451 "ddgst": ${ddgst:-false} 00:23:44.451 }, 00:23:44.451 "method": "bdev_nvme_attach_controller" 00:23:44.451 } 00:23:44.451 EOF 00:23:44.451 )") 00:23:44.451 22:15:26 -- nvmf/common.sh@543 -- # cat 00:23:44.451 22:15:26 -- nvmf/common.sh@545 -- # jq . 
00:23:44.451 22:15:26 -- nvmf/common.sh@546 -- # IFS=, 00:23:44.451 22:15:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:44.451 "params": { 00:23:44.451 "name": "Nvme1", 00:23:44.451 "trtype": "tcp", 00:23:44.451 "traddr": "10.0.0.2", 00:23:44.451 "adrfam": "ipv4", 00:23:44.451 "trsvcid": "4420", 00:23:44.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.451 "hdgst": false, 00:23:44.451 "ddgst": false 00:23:44.451 }, 00:23:44.451 "method": "bdev_nvme_attach_controller" 00:23:44.451 }' 00:23:44.451 [2024-04-24 22:15:26.565547] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:44.451 [2024-04-24 22:15:26.565634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035168 ] 00:23:44.451 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.451 [2024-04-24 22:15:26.672019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.709 [2024-04-24 22:15:26.790796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.967 Running I/O for 15 seconds... 
00:23:47.500 22:15:29 -- host/bdevperf.sh@33 -- # kill -9 4034882 00:23:47.500 22:15:29 -- host/bdevperf.sh@35 -- # sleep 3 00:23:47.500 [2024-04-24 22:15:29.532138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.500 [2024-04-24 22:15:29.532569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 
[2024-04-24 22:15:29.532587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.500 [2024-04-24 22:15:29.532603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.500 [2024-04-24 22:15:29.532635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.500 [2024-04-24 22:15:29.532667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.500 [2024-04-24 22:15:29.532684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.532979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.532995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.533026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.533058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.501 [2024-04-24 22:15:29.533090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 
[2024-04-24 22:15:29.533139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 
[2024-04-24 22:15:29.533700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.501 [2024-04-24 22:15:29.533980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.501 [2024-04-24 22:15:29.533996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.502 [2024-04-24 22:15:29.534011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20056 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 
22:15:29.534258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.534970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.534990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.502 [2024-04-24 22:15:29.535006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.502 [2024-04-24 22:15:29.535024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.503 [2024-04-24 22:15:29.535203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 
[2024-04-24 22:15:29.535766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.535982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.535998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.503 [2024-04-24 22:15:29.536124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:47.503 [2024-04-24 22:15:29.536155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.503 [2024-04-24 22:15:29.536291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.503 [2024-04-24 22:15:29.536308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.503 [2024-04-24 22:15:29.536322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.504 [2024-04-24 22:15:29.536339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.504 [2024-04-24 22:15:29.536354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.504 [2024-04-24 22:15:29.536371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.504 [2024-04-24 22:15:29.536385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.504 [2024-04-24 22:15:29.536408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bc430 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.536427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:47.504 [2024-04-24 22:15:29.536440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:47.504 [2024-04-24 22:15:29.536453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:8 PRP1 0x0 PRP2 0x0 00:23:47.504 [2024-04-24 22:15:29.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.504 [2024-04-24 22:15:29.536538] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bc430 was disconnected and freed. reset controller. 
00:23:47.504 [2024-04-24 22:15:29.540271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.540347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.541165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.541410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.541467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.541485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.541782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.542083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.542107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.542126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.546661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.555501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.555992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.556143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.556171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.556188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.556494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.556794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.556817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.556833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.561375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.570491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.570983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.571170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.571225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.571242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.571548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.571847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.571871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.571885] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.576427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.585536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.586022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.586174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.586202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.586218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.586523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.586822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.586845] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.586860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.591390] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.600500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.600979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.601138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.601171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.601189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.601495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.601794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.601818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.601833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.606365] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.615473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.615952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.616144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.616172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.616189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.616497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.616796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.616819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.616834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.621370] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.630473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.630966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.631138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.631165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.631182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.631491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.631790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.504 [2024-04-24 22:15:29.631814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.504 [2024-04-24 22:15:29.631829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.504 [2024-04-24 22:15:29.636356] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.504 [2024-04-24 22:15:29.645467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.504 [2024-04-24 22:15:29.645936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.646151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.504 [2024-04-24 22:15:29.646180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.504 [2024-04-24 22:15:29.646204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.504 [2024-04-24 22:15:29.646512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.504 [2024-04-24 22:15:29.646811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.505 [2024-04-24 22:15:29.646834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.505 [2024-04-24 22:15:29.646849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.505 [2024-04-24 22:15:29.651373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.505 [2024-04-24 22:15:29.660467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.505 [2024-04-24 22:15:29.660943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.661147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.661175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.505 [2024-04-24 22:15:29.661192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.505 [2024-04-24 22:15:29.661495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.505 [2024-04-24 22:15:29.661794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.505 [2024-04-24 22:15:29.661817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.505 [2024-04-24 22:15:29.661833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.505 [2024-04-24 22:15:29.666358] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.505 [2024-04-24 22:15:29.675447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.505 [2024-04-24 22:15:29.675920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.676094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.676122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.505 [2024-04-24 22:15:29.676139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.505 [2024-04-24 22:15:29.676458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.505 [2024-04-24 22:15:29.676757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.505 [2024-04-24 22:15:29.676781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.505 [2024-04-24 22:15:29.676797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.505 [2024-04-24 22:15:29.681324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.505 [2024-04-24 22:15:29.690428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.505 [2024-04-24 22:15:29.690925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.691080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.691108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.505 [2024-04-24 22:15:29.691124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.505 [2024-04-24 22:15:29.691433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.505 [2024-04-24 22:15:29.691732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.505 [2024-04-24 22:15:29.691755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.505 [2024-04-24 22:15:29.691769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.505 [2024-04-24 22:15:29.696297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.505 [2024-04-24 22:15:29.705386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.505 [2024-04-24 22:15:29.705898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.706091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.706119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.505 [2024-04-24 22:15:29.706136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.505 [2024-04-24 22:15:29.706441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.505 [2024-04-24 22:15:29.706739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.505 [2024-04-24 22:15:29.706762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.505 [2024-04-24 22:15:29.706777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.505 [2024-04-24 22:15:29.711303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.505 [2024-04-24 22:15:29.720414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.505 [2024-04-24 22:15:29.720919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.721134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.505 [2024-04-24 22:15:29.721161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.505 [2024-04-24 22:15:29.721178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.505 [2024-04-24 22:15:29.721484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.505 [2024-04-24 22:15:29.721783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.506 [2024-04-24 22:15:29.721806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.506 [2024-04-24 22:15:29.721821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.506 [2024-04-24 22:15:29.726348] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.506 [2024-04-24 22:15:29.735441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.506 [2024-04-24 22:15:29.735936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.506 [2024-04-24 22:15:29.736079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.506 [2024-04-24 22:15:29.736107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.506 [2024-04-24 22:15:29.736124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.506 [2024-04-24 22:15:29.736428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.506 [2024-04-24 22:15:29.736732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.506 [2024-04-24 22:15:29.736756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.506 [2024-04-24 22:15:29.736771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.506 [2024-04-24 22:15:29.741295] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.506 [2024-04-24 22:15:29.750386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.506 [2024-04-24 22:15:29.750881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.506 [2024-04-24 22:15:29.751028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.506 [2024-04-24 22:15:29.751056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.506 [2024-04-24 22:15:29.751074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.506 [2024-04-24 22:15:29.751368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.506 [2024-04-24 22:15:29.751675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.506 [2024-04-24 22:15:29.751699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.506 [2024-04-24 22:15:29.751714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.766 [2024-04-24 22:15:29.756240] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.766 [2024-04-24 22:15:29.765325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.766 [2024-04-24 22:15:29.765831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.766 [2024-04-24 22:15:29.766043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.766 [2024-04-24 22:15:29.766072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:47.766 [2024-04-24 22:15:29.766088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:47.766 [2024-04-24 22:15:29.766383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:47.766 [2024-04-24 22:15:29.766690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.766 [2024-04-24 22:15:29.766714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.766 [2024-04-24 22:15:29.766729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.766 [2024-04-24 22:15:29.771255] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.766 [2024-04-24 22:15:29.780345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.780845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.781011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.781038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.781055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.781349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.781663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.781692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.766 [2024-04-24 22:15:29.781708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.766 [2024-04-24 22:15:29.786252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.766 [2024-04-24 22:15:29.795538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.796033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.796181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.796210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.796227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.796536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.796835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.796861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.766 [2024-04-24 22:15:29.796878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.766 [2024-04-24 22:15:29.801493] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.766 [2024-04-24 22:15:29.810623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.811127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.811337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.811365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.811382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.811684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.811983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.812006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.766 [2024-04-24 22:15:29.812020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.766 [2024-04-24 22:15:29.816560] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.766 [2024-04-24 22:15:29.825645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.826129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.826316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.826343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.826360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.826665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.826964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.826988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.766 [2024-04-24 22:15:29.827008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.766 [2024-04-24 22:15:29.831547] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.766 [2024-04-24 22:15:29.840627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.841107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.841271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.841299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.841316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.841621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.841920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.841943] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.766 [2024-04-24 22:15:29.841959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.766 [2024-04-24 22:15:29.846497] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.766 [2024-04-24 22:15:29.855595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.766 [2024-04-24 22:15:29.856086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.856256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.766 [2024-04-24 22:15:29.856284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.766 [2024-04-24 22:15:29.856301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.766 [2024-04-24 22:15:29.856607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.766 [2024-04-24 22:15:29.856906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.766 [2024-04-24 22:15:29.856929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.856944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.861481] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.870563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.871047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.871236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.871264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.871281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.871587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.871886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.871909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.871924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.876472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.885554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.886035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.886224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.886252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.886269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.886574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.886873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.886896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.886911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.891447] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.900531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.901023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.901156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.901184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.901200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.901506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.901804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.901828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.901843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.906370] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.915464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.915945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.916112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.916140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.916157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.916462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.916761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.916784] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.916800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.921323] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.930426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.930909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.931100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.931128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.931145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.931450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.931749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.931772] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.931788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.936317] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.945405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.945883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.946024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.946052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.946068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.946362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.946671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.946695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.946710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.951237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.960334] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.960800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.960993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.961021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.961038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.961332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.961642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.961666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.961681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.966209] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.975295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.975788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.975984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.976011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.976028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.976323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.976633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.976657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.976672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.981198] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:29.990287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:29.990775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.990963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:29.990990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.767 [2024-04-24 22:15:29.991007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.767 [2024-04-24 22:15:29.991301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.767 [2024-04-24 22:15:29.991612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.767 [2024-04-24 22:15:29.991636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.767 [2024-04-24 22:15:29.991651] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.767 [2024-04-24 22:15:29.996178] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.767 [2024-04-24 22:15:30.005283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:47.767 [2024-04-24 22:15:30.005774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.767 [2024-04-24 22:15:30.005926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.768 [2024-04-24 22:15:30.005954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:47.768 [2024-04-24 22:15:30.005971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:47.768 [2024-04-24 22:15:30.006266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:47.768 [2024-04-24 22:15:30.006575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:47.768 [2024-04-24 22:15:30.006599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:47.768 [2024-04-24 22:15:30.006615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:47.768 [2024-04-24 22:15:30.011139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:47.768 [2024-04-24 22:15:30.020600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.021125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.021313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.021344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.021364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.021670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.027 [2024-04-24 22:15:30.021973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.027 [2024-04-24 22:15:30.021999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.027 [2024-04-24 22:15:30.022017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.027 [2024-04-24 22:15:30.026559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.027 [2024-04-24 22:15:30.035643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.036128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.036297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.036325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.036342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.036644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.027 [2024-04-24 22:15:30.036944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.027 [2024-04-24 22:15:30.036967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.027 [2024-04-24 22:15:30.036983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.027 [2024-04-24 22:15:30.041602] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.027 [2024-04-24 22:15:30.050714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.051187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.051327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.051355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.051372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.051674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.027 [2024-04-24 22:15:30.051973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.027 [2024-04-24 22:15:30.051996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.027 [2024-04-24 22:15:30.052011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.027 [2024-04-24 22:15:30.056545] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.027 [2024-04-24 22:15:30.065636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.066102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.066304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.066332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.066357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.066662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.027 [2024-04-24 22:15:30.066961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.027 [2024-04-24 22:15:30.066984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.027 [2024-04-24 22:15:30.066999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.027 [2024-04-24 22:15:30.071538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.027 [2024-04-24 22:15:30.080627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.081104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.081286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.081315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.081332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.081646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.027 [2024-04-24 22:15:30.081946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.027 [2024-04-24 22:15:30.081970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.027 [2024-04-24 22:15:30.081985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.027 [2024-04-24 22:15:30.086525] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.027 [2024-04-24 22:15:30.095634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.027 [2024-04-24 22:15:30.096133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.096299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.027 [2024-04-24 22:15:30.096327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.027 [2024-04-24 22:15:30.096344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.027 [2024-04-24 22:15:30.096647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.096946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.096970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.096985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.101518] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.110608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.028 [2024-04-24 22:15:30.111070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.111279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.111306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.028 [2024-04-24 22:15:30.111323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.028 [2024-04-24 22:15:30.111634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.111933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.111956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.111970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.116507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.125600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.028 [2024-04-24 22:15:30.126104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.126257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.126285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.028 [2024-04-24 22:15:30.126302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.028 [2024-04-24 22:15:30.126604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.126904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.126928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.126942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.131483] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.140572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.028 [2024-04-24 22:15:30.141027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.141190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.141218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.028 [2024-04-24 22:15:30.141235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.028 [2024-04-24 22:15:30.141539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.141838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.141861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.141876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.146414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.155505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.028 [2024-04-24 22:15:30.155984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.156149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.156177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.028 [2024-04-24 22:15:30.156194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.028 [2024-04-24 22:15:30.156497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.156805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.156829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.156844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.161375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.170480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.028 [2024-04-24 22:15:30.170928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.171118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.028 [2024-04-24 22:15:30.171146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.028 [2024-04-24 22:15:30.171162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.028 [2024-04-24 22:15:30.171468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.028 [2024-04-24 22:15:30.171767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.028 [2024-04-24 22:15:30.171791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.028 [2024-04-24 22:15:30.171805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.028 [2024-04-24 22:15:30.176331] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.028 [2024-04-24 22:15:30.185447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.028 [2024-04-24 22:15:30.185957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.186165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.186193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.028 [2024-04-24 22:15:30.186210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.028 [2024-04-24 22:15:30.186514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.028 [2024-04-24 22:15:30.186813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.028 [2024-04-24 22:15:30.186837] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.028 [2024-04-24 22:15:30.186852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.028 [2024-04-24 22:15:30.191381] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.028 [2024-04-24 22:15:30.200488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.028 [2024-04-24 22:15:30.201032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.201278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.201306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.028 [2024-04-24 22:15:30.201323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.028 [2024-04-24 22:15:30.201625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.028 [2024-04-24 22:15:30.201924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.028 [2024-04-24 22:15:30.201953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.028 [2024-04-24 22:15:30.201969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.028 [2024-04-24 22:15:30.206505] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.028 [2024-04-24 22:15:30.215593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.028 [2024-04-24 22:15:30.216126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.216339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.216367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.028 [2024-04-24 22:15:30.216384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.028 [2024-04-24 22:15:30.216691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.028 [2024-04-24 22:15:30.216990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.028 [2024-04-24 22:15:30.217013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.028 [2024-04-24 22:15:30.217028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.028 [2024-04-24 22:15:30.221562] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.028 [2024-04-24 22:15:30.230668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.028 [2024-04-24 22:15:30.231335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.231634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.028 [2024-04-24 22:15:30.231673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.028 [2024-04-24 22:15:30.231692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.028 [2024-04-24 22:15:30.231994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.028 [2024-04-24 22:15:30.232294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.028 [2024-04-24 22:15:30.232317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.028 [2024-04-24 22:15:30.232332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.028 [2024-04-24 22:15:30.236891] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.028 [2024-04-24 22:15:30.245729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.029 [2024-04-24 22:15:30.246299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.246586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.246617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.029 [2024-04-24 22:15:30.246634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.029 [2024-04-24 22:15:30.246928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.029 [2024-04-24 22:15:30.247227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.029 [2024-04-24 22:15:30.247250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.029 [2024-04-24 22:15:30.247272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.029 [2024-04-24 22:15:30.251822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.029 [2024-04-24 22:15:30.260676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.029 [2024-04-24 22:15:30.261223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.261480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.261511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.029 [2024-04-24 22:15:30.261529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.029 [2024-04-24 22:15:30.261824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.029 [2024-04-24 22:15:30.262123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.029 [2024-04-24 22:15:30.262146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.029 [2024-04-24 22:15:30.262161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.029 [2024-04-24 22:15:30.266708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.029 [2024-04-24 22:15:30.275539] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.029 [2024-04-24 22:15:30.276195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.276469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.029 [2024-04-24 22:15:30.276501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.029 [2024-04-24 22:15:30.276520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.029 [2024-04-24 22:15:30.276820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.029 [2024-04-24 22:15:30.277120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.029 [2024-04-24 22:15:30.277144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.029 [2024-04-24 22:15:30.277159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.288 [2024-04-24 22:15:30.281713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.288 [2024-04-24 22:15:30.290557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.288 [2024-04-24 22:15:30.291113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.291381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.291423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.288 [2024-04-24 22:15:30.291441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.288 [2024-04-24 22:15:30.291735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.288 [2024-04-24 22:15:30.292041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.288 [2024-04-24 22:15:30.292065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.288 [2024-04-24 22:15:30.292080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.288 [2024-04-24 22:15:30.296702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.288 [2024-04-24 22:15:30.305618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.288 [2024-04-24 22:15:30.306152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.306340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.306369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.288 [2024-04-24 22:15:30.306386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.288 [2024-04-24 22:15:30.306691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.288 [2024-04-24 22:15:30.306990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.288 [2024-04-24 22:15:30.307013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.288 [2024-04-24 22:15:30.307028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.288 [2024-04-24 22:15:30.311574] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.288 [2024-04-24 22:15:30.320673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.288 [2024-04-24 22:15:30.321185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.321384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.321424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.288 [2024-04-24 22:15:30.321442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.288 [2024-04-24 22:15:30.321736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.288 [2024-04-24 22:15:30.322044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.288 [2024-04-24 22:15:30.322067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.288 [2024-04-24 22:15:30.322081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.288 [2024-04-24 22:15:30.326630] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.288 [2024-04-24 22:15:30.335732] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.288 [2024-04-24 22:15:30.336276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.336505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.288 [2024-04-24 22:15:30.336534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.288 [2024-04-24 22:15:30.336551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.288 [2024-04-24 22:15:30.336846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.288 [2024-04-24 22:15:30.337145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.288 [2024-04-24 22:15:30.337168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.288 [2024-04-24 22:15:30.337183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.288 [2024-04-24 22:15:30.341729] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.350833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.351341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.351564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.351593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.351610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.351904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.352203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.352226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.352241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.356786] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.365902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.366435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.366696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.366724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.366742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.367036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.367335] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.367358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.367372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.371920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.381022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.381536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.381786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.381814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.381831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.382125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.382444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.382468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.382483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.387017] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.396123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.396689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.396929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.396957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.396974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.397268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.397581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.397605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.397620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.402156] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.410994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.411544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.411775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.411803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.411820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.412114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.412428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.412452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.412467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.416995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.426084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.426651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.426874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.426902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.426918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.427213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.427525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.427549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.427563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.432098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.441186] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.441736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.441995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.442023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.442040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.442334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.442645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.442668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.442683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.447262] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.456099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.289 [2024-04-24 22:15:30.456708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.456980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.289 [2024-04-24 22:15:30.457008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.289 [2024-04-24 22:15:30.457025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.289 [2024-04-24 22:15:30.457319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.289 [2024-04-24 22:15:30.457630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.289 [2024-04-24 22:15:30.457655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.289 [2024-04-24 22:15:30.457670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.289 [2024-04-24 22:15:30.462202] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.289 [2024-04-24 22:15:30.471030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.289 [2024-04-24 22:15:30.471589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.289 [2024-04-24 22:15:30.471828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.289 [2024-04-24 22:15:30.471856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.289 [2024-04-24 22:15:30.471873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.289 [2024-04-24 22:15:30.472167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.289 [2024-04-24 22:15:30.472480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.289 [2024-04-24 22:15:30.472504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.289 [2024-04-24 22:15:30.472519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.289 [2024-04-24 22:15:30.477056] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.289 [2024-04-24 22:15:30.486160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.289 [2024-04-24 22:15:30.486711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.289 [2024-04-24 22:15:30.486905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.289 [2024-04-24 22:15:30.486933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.290 [2024-04-24 22:15:30.486955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.290 [2024-04-24 22:15:30.487250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.290 [2024-04-24 22:15:30.487564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.290 [2024-04-24 22:15:30.487588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.290 [2024-04-24 22:15:30.487604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.290 [2024-04-24 22:15:30.492135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.290 [2024-04-24 22:15:30.501230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.290 [2024-04-24 22:15:30.501764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.501975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.502003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.290 [2024-04-24 22:15:30.502019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.290 [2024-04-24 22:15:30.502313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.290 [2024-04-24 22:15:30.502625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.290 [2024-04-24 22:15:30.502649] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.290 [2024-04-24 22:15:30.502664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.290 [2024-04-24 22:15:30.507195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.290 [2024-04-24 22:15:30.516304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.290 [2024-04-24 22:15:30.516864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.517053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.517082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.290 [2024-04-24 22:15:30.517098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.290 [2024-04-24 22:15:30.517405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.290 [2024-04-24 22:15:30.517704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.290 [2024-04-24 22:15:30.517727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.290 [2024-04-24 22:15:30.517742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.290 [2024-04-24 22:15:30.522270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.290 [2024-04-24 22:15:30.531358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.290 [2024-04-24 22:15:30.531919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.532150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.290 [2024-04-24 22:15:30.532182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.290 [2024-04-24 22:15:30.532199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.290 [2024-04-24 22:15:30.532512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.290 [2024-04-24 22:15:30.532810] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.290 [2024-04-24 22:15:30.532833] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.290 [2024-04-24 22:15:30.532848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.290 [2024-04-24 22:15:30.537378] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.549 [2024-04-24 22:15:30.546209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.549 [2024-04-24 22:15:30.546756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.549 [2024-04-24 22:15:30.547003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.549 [2024-04-24 22:15:30.547031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.549 [2024-04-24 22:15:30.547048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.549 [2024-04-24 22:15:30.547342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.549 [2024-04-24 22:15:30.547658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.549 [2024-04-24 22:15:30.547682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.549 [2024-04-24 22:15:30.547697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.549 [2024-04-24 22:15:30.552290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.549 [2024-04-24 22:15:30.561183] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.549 [2024-04-24 22:15:30.561738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.549 [2024-04-24 22:15:30.561978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.549 [2024-04-24 22:15:30.562006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.549 [2024-04-24 22:15:30.562023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.549 [2024-04-24 22:15:30.562317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.549 [2024-04-24 22:15:30.562628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.562652] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.562667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.567446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.576286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.576846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.577109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.577137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.577154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.577460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.577767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.577791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.577805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.582340] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.591189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.591745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.591995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.592023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.592040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.592334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.592643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.592667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.592682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.597216] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.606056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.606581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.606757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.606785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.606802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.607096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.607408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.607432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.607447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.611976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.621068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.621597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.621771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.621799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.621816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.622110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.622424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.622453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.622469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.627002] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.636089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.636618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.636784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.636812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.636829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.637122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.637436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.637460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.637475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.642005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.651095] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.651633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.651863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.651891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.651908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.652202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.652514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.652538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.652553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.657083] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.666173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.666718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.666944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.666972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.666989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.667283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.667595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.667619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.667640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.672171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.681261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.681830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.682026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.682054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.682071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.682365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.682678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.682702] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.550 [2024-04-24 22:15:30.682717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.550 [2024-04-24 22:15:30.687249] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.550 [2024-04-24 22:15:30.696338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.550 [2024-04-24 22:15:30.696889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.697140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.550 [2024-04-24 22:15:30.697168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.550 [2024-04-24 22:15:30.697185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.550 [2024-04-24 22:15:30.697493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.550 [2024-04-24 22:15:30.697792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.550 [2024-04-24 22:15:30.697815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.697830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.702378] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.711199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.711791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.711997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.712025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.712041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.712335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.712645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.712669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.712684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.717218] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.726048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.726594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.726836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.726863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.726880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.727175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.727484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.727508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.727523] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.732052] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.741141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.741654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.741829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.741857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.741874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.742168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.742480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.742504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.742519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.747047] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.756138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.756699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.756944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.756972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.756989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.757283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.757595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.757619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.757634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.762164] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.770996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.771518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.771711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.771738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.771755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.772049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.772347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.772370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.772384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.776927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.786024] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.786708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.786946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.786976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.786994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.787295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.787607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.787631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.787646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.551 [2024-04-24 22:15:30.792183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.551 [2024-04-24 22:15:30.801026] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.551 [2024-04-24 22:15:30.801538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.801693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.551 [2024-04-24 22:15:30.801722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.551 [2024-04-24 22:15:30.801739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.551 [2024-04-24 22:15:30.802034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.551 [2024-04-24 22:15:30.802342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.551 [2024-04-24 22:15:30.802367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.551 [2024-04-24 22:15:30.802383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.810 [2024-04-24 22:15:30.806981] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.810 [2024-04-24 22:15:30.816116] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.810 [2024-04-24 22:15:30.816616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.810 [2024-04-24 22:15:30.816778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.810 [2024-04-24 22:15:30.816806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.810 [2024-04-24 22:15:30.816823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.810 [2024-04-24 22:15:30.817118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.811 [2024-04-24 22:15:30.817427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.811 [2024-04-24 22:15:30.817451] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.811 [2024-04-24 22:15:30.817467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.811 [2024-04-24 22:15:30.821991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.811 [2024-04-24 22:15:30.831076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.811 [2024-04-24 22:15:30.831536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.831696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.831724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.811 [2024-04-24 22:15:30.831741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.811 [2024-04-24 22:15:30.832036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.811 [2024-04-24 22:15:30.832334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.811 [2024-04-24 22:15:30.832357] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.811 [2024-04-24 22:15:30.832373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.811 [2024-04-24 22:15:30.836911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.811 [2024-04-24 22:15:30.845999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.811 [2024-04-24 22:15:30.846566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.846840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.846868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.811 [2024-04-24 22:15:30.846885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.811 [2024-04-24 22:15:30.847179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.811 [2024-04-24 22:15:30.847490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.811 [2024-04-24 22:15:30.847514] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.811 [2024-04-24 22:15:30.847529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.811 [2024-04-24 22:15:30.852069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.811 [2024-04-24 22:15:30.860891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:48.811 [2024-04-24 22:15:30.861442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.861636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.811 [2024-04-24 22:15:30.861664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:48.811 [2024-04-24 22:15:30.861681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:48.811 [2024-04-24 22:15:30.861975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:48.811 [2024-04-24 22:15:30.862273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:48.811 [2024-04-24 22:15:30.862296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:48.811 [2024-04-24 22:15:30.862311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:48.811 [2024-04-24 22:15:30.866861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:48.811 [2024-04-24 22:15:30.875981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.876515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.876680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.876708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.876725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.877019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.877318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.877341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.877356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.881889] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.811 [2024-04-24 22:15:30.890993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.891508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.891714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.891745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.891762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.892056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.892355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.892377] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.892400] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.896938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.811 [2024-04-24 22:15:30.906039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.906528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.906694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.906722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.906745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.907040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.907339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.907362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.907378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.911911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.811 [2024-04-24 22:15:30.920999] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.921510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.921644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.921671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.921688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.921982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.922280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.922304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.922319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.926863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.811 [2024-04-24 22:15:30.935972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.936612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.936856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.936887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.936906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.937206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.937520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.937545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.937561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.942100] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.811 [2024-04-24 22:15:30.950964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.811 [2024-04-24 22:15:30.951481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.951672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.811 [2024-04-24 22:15:30.951701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.811 [2024-04-24 22:15:30.951717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.811 [2024-04-24 22:15:30.952017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.811 [2024-04-24 22:15:30.952317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.811 [2024-04-24 22:15:30.952341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.811 [2024-04-24 22:15:30.952356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.811 [2024-04-24 22:15:30.956897] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:30.965984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:30.966517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.966760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.966788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:30.966805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:30.967098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:30.967405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:30.967429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:30.967444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:30.971969] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:30.981076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:30.981633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.981932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.981961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:30.981978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:30.982272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:30.982590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:30.982615] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:30.982630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:30.987162] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:30.995994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:30.996509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.996652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:30.996680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:30.996697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:30.996997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:30.997296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:30.997319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:30.997334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:31.001877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:31.010972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:31.011478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.011669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.011697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:31.011714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:31.012008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:31.012306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:31.012330] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:31.012345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:31.016884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:31.025977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:31.026522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.026721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.026750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:31.026767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:31.027060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:31.027359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:31.027382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:31.027409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:31.031944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:31.041040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:31.041585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.041797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.041825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:31.041842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:31.042135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:31.042453] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:31.042482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:31.042498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:31.047029] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.812 [2024-04-24 22:15:31.056124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.812 [2024-04-24 22:15:31.056658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.056805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.812 [2024-04-24 22:15:31.056834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:48.812 [2024-04-24 22:15:31.056851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:48.812 [2024-04-24 22:15:31.057148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:48.812 [2024-04-24 22:15:31.057467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.812 [2024-04-24 22:15:31.057492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.812 [2024-04-24 22:15:31.057507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.812 [2024-04-24 22:15:31.062087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.071229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.071717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.071930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.071959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.071976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.072270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.072580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.072604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.072620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.077153] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.086257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.086787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.086998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.087026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.087043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.087338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.087649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.087673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.087694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.092225] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.101329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.101907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.102148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.102179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.102196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.102504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.102804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.102827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.102842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.107374] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.116197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.116759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.116990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.117018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.117035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.117329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.117640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.117664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.117679] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.122207] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.131297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.131860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.132072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.132100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.132117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.132422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.132721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.132745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.132765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.137293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.146381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.147041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.147303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.147334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.147352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.147671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.147973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.147996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.148011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.072 [2024-04-24 22:15:31.152556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.072 [2024-04-24 22:15:31.161374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.072 [2024-04-24 22:15:31.161916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.162131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.072 [2024-04-24 22:15:31.162159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.072 [2024-04-24 22:15:31.162176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.072 [2024-04-24 22:15:31.162486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.072 [2024-04-24 22:15:31.162786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.072 [2024-04-24 22:15:31.162810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.072 [2024-04-24 22:15:31.162824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.073 [2024-04-24 22:15:31.167355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.073 [2024-04-24 22:15:31.176453] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.073 [2024-04-24 22:15:31.176978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.073 [2024-04-24 22:15:31.177159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.073 [2024-04-24 22:15:31.177187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.073 [2024-04-24 22:15:31.177204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.073 [2024-04-24 22:15:31.177512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.073 [2024-04-24 22:15:31.177812] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.073 [2024-04-24 22:15:31.177835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.073 [2024-04-24 22:15:31.177850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.073 [2024-04-24 22:15:31.182381] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.073 [2024-04-24 22:15:31.191495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.191989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.192155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.192183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.192200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.192505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.192804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.192827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.192843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.197376] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.206524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.207000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.207178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.207206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.207223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.207527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.207828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.207852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.207867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.212411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.221530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.222055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.222271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.222299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.222316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.222620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.222926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.222949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.222964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.227549] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.236385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.236897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.237099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.237127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.237144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.237449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.237748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.237771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.237786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.242321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.251447] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.251928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.252192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.252220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.252236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.252542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.252841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.252864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.252879] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.257425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.266550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.267052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.267235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.267263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.267279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.267594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.267894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.267917] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.267932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.272477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.281577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.282092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.282297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.282325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.282342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.282651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.282951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.282974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.282989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.287553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.296666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.297136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.297314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.297342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.073 [2024-04-24 22:15:31.297359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.073 [2024-04-24 22:15:31.297664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.073 [2024-04-24 22:15:31.297963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.073 [2024-04-24 22:15:31.297986] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.073 [2024-04-24 22:15:31.298001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.073 [2024-04-24 22:15:31.302546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.073 [2024-04-24 22:15:31.311654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.073 [2024-04-24 22:15:31.312157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.312363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.073 [2024-04-24 22:15:31.312391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.074 [2024-04-24 22:15:31.312420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.074 [2024-04-24 22:15:31.312721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.074 [2024-04-24 22:15:31.313030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.074 [2024-04-24 22:15:31.313054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.074 [2024-04-24 22:15:31.313069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.074 [2024-04-24 22:15:31.317674] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.333 [2024-04-24 22:15:31.326568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.333 [2024-04-24 22:15:31.327095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.333 [2024-04-24 22:15:31.327257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.333 [2024-04-24 22:15:31.327285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.333 [2024-04-24 22:15:31.327307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.333 [2024-04-24 22:15:31.327614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.333 [2024-04-24 22:15:31.327913] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.333 [2024-04-24 22:15:31.327935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.333 [2024-04-24 22:15:31.327951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.332499] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.341611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.342144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.342352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.342380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.342408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.342705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.343003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.343027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.343041] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.347587] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.356696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.357260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.357429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.357458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.357475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.357769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.358069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.358092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.358107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.362653] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.371754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.372368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.372599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.372631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.372654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.372956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.373255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.373279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.373294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.377833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.386680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.387232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.387484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.387514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.387531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.387825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.388124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.388147] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.388162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.392705] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.401527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.402072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.402338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.402366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.402383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.402689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.402988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.403011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.403026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.407565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.416377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.416935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.417162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.417190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.417207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.417519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.417819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.417842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.417857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.422383] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.431472] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.432021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.432260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.432288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.432305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.432609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.432909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.432932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.432947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.437481] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.446568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.447115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.447382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.447427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.447445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.447739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.448037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.448060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.448075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.452615] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.461436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.334 [2024-04-24 22:15:31.461992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.462226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.334 [2024-04-24 22:15:31.462254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.334 [2024-04-24 22:15:31.462271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.334 [2024-04-24 22:15:31.462583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.334 [2024-04-24 22:15:31.462887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.334 [2024-04-24 22:15:31.462911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.334 [2024-04-24 22:15:31.462925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.334 [2024-04-24 22:15:31.467459] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.334 [2024-04-24 22:15:31.476414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.335 [2024-04-24 22:15:31.476969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.477201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.477229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.335 [2024-04-24 22:15:31.477246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.335 [2024-04-24 22:15:31.477552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.335 [2024-04-24 22:15:31.477854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.335 [2024-04-24 22:15:31.477877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.335 [2024-04-24 22:15:31.477892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.335 [2024-04-24 22:15:31.482436] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.335 [2024-04-24 22:15:31.491522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.335 [2024-04-24 22:15:31.492046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.492307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.492343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.335 [2024-04-24 22:15:31.492360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.335 [2024-04-24 22:15:31.492669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.335 [2024-04-24 22:15:31.492968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.335 [2024-04-24 22:15:31.492992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.335 [2024-04-24 22:15:31.493006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.335 [2024-04-24 22:15:31.497543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.335 [2024-04-24 22:15:31.506628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.335 [2024-04-24 22:15:31.507172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.507438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.507483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.335 [2024-04-24 22:15:31.507501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.335 [2024-04-24 22:15:31.507796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.335 [2024-04-24 22:15:31.508094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.335 [2024-04-24 22:15:31.508123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.335 [2024-04-24 22:15:31.508139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.335 [2024-04-24 22:15:31.512677] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.335 [2024-04-24 22:15:31.521513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:49.335 [2024-04-24 22:15:31.522029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.522205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:49.335 [2024-04-24 22:15:31.522234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:49.335 [2024-04-24 22:15:31.522251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:49.335 [2024-04-24 22:15:31.522558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:49.335 [2024-04-24 22:15:31.522858] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:49.335 [2024-04-24 22:15:31.522881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:49.335 [2024-04-24 22:15:31.522896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:49.335 [2024-04-24 22:15:31.527432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:49.335 [2024-04-24 22:15:31.536513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.335 [2024-04-24 22:15:31.537059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.537308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.537336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.335 [2024-04-24 22:15:31.537352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.335 [2024-04-24 22:15:31.537658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.335 [2024-04-24 22:15:31.537957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.335 [2024-04-24 22:15:31.537980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.335 [2024-04-24 22:15:31.537995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.335 [2024-04-24 22:15:31.542531] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.335 [2024-04-24 22:15:31.551612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.335 [2024-04-24 22:15:31.552164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.552432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.552461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.335 [2024-04-24 22:15:31.552478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.335 [2024-04-24 22:15:31.552773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.335 [2024-04-24 22:15:31.553070] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.335 [2024-04-24 22:15:31.553093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.335 [2024-04-24 22:15:31.553118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.335 [2024-04-24 22:15:31.557660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.335 [2024-04-24 22:15:31.566479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.335 [2024-04-24 22:15:31.566985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.567195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.567223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.335 [2024-04-24 22:15:31.567240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.335 [2024-04-24 22:15:31.567551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.335 [2024-04-24 22:15:31.567855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.335 [2024-04-24 22:15:31.567879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.335 [2024-04-24 22:15:31.567895] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.335 [2024-04-24 22:15:31.572500] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.335 [2024-04-24 22:15:31.581351] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.335 [2024-04-24 22:15:31.581898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.582045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.335 [2024-04-24 22:15:31.582073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.335 [2024-04-24 22:15:31.582091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.335 [2024-04-24 22:15:31.582385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.335 [2024-04-24 22:15:31.582700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.335 [2024-04-24 22:15:31.582724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.335 [2024-04-24 22:15:31.582739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.335 [2024-04-24 22:15:31.587270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.596364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.596880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.597093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.597121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.597138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.597663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.597970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.597993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.598008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.602551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.611375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.611886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.612086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.612114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.612130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.612433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.612733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.612755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.612770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.617300] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.626400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.626901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.627090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.627117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.627134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.627440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.627739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.627762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.627777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.632304] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.641398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.641948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.642180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.642208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.642224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.642530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.642830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.642853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.642868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.647402] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.656482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.657028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.657260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.657288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.657304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.657609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.657908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.657931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.657946] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.662482] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.671567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.672064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.672236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.672264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.672281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.672583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.672882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.672905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.672920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.677455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.686555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.687067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.687314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.687341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.595 [2024-04-24 22:15:31.687358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.595 [2024-04-24 22:15:31.687662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.595 [2024-04-24 22:15:31.687961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.595 [2024-04-24 22:15:31.687984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.595 [2024-04-24 22:15:31.688000] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.595 [2024-04-24 22:15:31.692537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.595 [2024-04-24 22:15:31.701624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.595 [2024-04-24 22:15:31.702136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.702369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.595 [2024-04-24 22:15:31.702405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.702424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.702719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.703017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.703040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.703055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.707590] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.716674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.717214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.717425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.717454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.717471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.717765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.718064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.718087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.718102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.722640] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.731742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.732273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.732530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.732559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.732575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.732870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.733169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.733192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.733207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.737744] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.746833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.747411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.747665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.747698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.747716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.748010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.748310] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.748333] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.748348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.752885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.761705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.762251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.762498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.762527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.762543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.762838] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.763137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.763160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.763176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.767713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.776798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.777295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.777497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.777526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.777543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.777837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.778136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.778159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.778174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.782717] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.791802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.792399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.792603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.792630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.792653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.792948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.793247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.793270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.793286] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.797821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.806910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.807458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.807645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.807672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.807689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.807983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.808282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.808305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.808321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.812855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.821939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.822406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.822559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.822587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.822604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.822898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.823202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.596 [2024-04-24 22:15:31.823227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.596 [2024-04-24 22:15:31.823247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.596 [2024-04-24 22:15:31.827861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.596 [2024-04-24 22:15:31.836986] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.596 [2024-04-24 22:15:31.837543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.837767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.596 [2024-04-24 22:15:31.837796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.596 [2024-04-24 22:15:31.837813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.596 [2024-04-24 22:15:31.838112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.596 [2024-04-24 22:15:31.838421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.597 [2024-04-24 22:15:31.838449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.597 [2024-04-24 22:15:31.838465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.597 [2024-04-24 22:15:31.842990] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.856 [2024-04-24 22:15:31.852081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.856 [2024-04-24 22:15:31.852687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.852947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.852975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.856 [2024-04-24 22:15:31.852991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.856 [2024-04-24 22:15:31.853285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.856 [2024-04-24 22:15:31.853597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.856 [2024-04-24 22:15:31.853622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.856 [2024-04-24 22:15:31.853637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.856 [2024-04-24 22:15:31.858164] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.856 [2024-04-24 22:15:31.866987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.856 [2024-04-24 22:15:31.867574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.867814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.867842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.856 [2024-04-24 22:15:31.867858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.856 [2024-04-24 22:15:31.868153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.856 [2024-04-24 22:15:31.868463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.856 [2024-04-24 22:15:31.868487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.856 [2024-04-24 22:15:31.868503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.856 [2024-04-24 22:15:31.873031] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.856 [2024-04-24 22:15:31.881851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.856 [2024-04-24 22:15:31.882354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.882635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.882664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.856 [2024-04-24 22:15:31.882680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.856 [2024-04-24 22:15:31.882975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.856 [2024-04-24 22:15:31.883279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.856 [2024-04-24 22:15:31.883304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.856 [2024-04-24 22:15:31.883319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.856 [2024-04-24 22:15:31.887855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.856 [2024-04-24 22:15:31.896942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.856 [2024-04-24 22:15:31.897444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.897635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.897664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.856 [2024-04-24 22:15:31.897680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.856 [2024-04-24 22:15:31.897974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.856 [2024-04-24 22:15:31.898273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.856 [2024-04-24 22:15:31.898296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.856 [2024-04-24 22:15:31.898311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.856 [2024-04-24 22:15:31.902852] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.856 [2024-04-24 22:15:31.911937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.856 [2024-04-24 22:15:31.912497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.912789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.856 [2024-04-24 22:15:31.912817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.912834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.913128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.913437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.913462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.913477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.918002] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:31.926818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:31.927444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.927724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.927755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.927773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.928074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.928374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.928417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.928434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.932964] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:31.941791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:31.942425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.942707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.942739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.942757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.943058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.943357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.943381] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.943408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.947942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:31.956769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:31.957309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.957482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.957513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.957530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.957825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.958125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.958148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.958164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.962703] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:31.971793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:31.972332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.972531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.972571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.972588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.972882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.973181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.973205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.973226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.977765] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:31.986865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:31.987380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.987562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:31.987591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:31.987608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:31.987903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:31.988202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:31.988225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:31.988240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:31.992777] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:32.001866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:32.002359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.002577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.002606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:32.002623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:32.002917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:32.003217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:32.003241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:32.003255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:32.007792] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:32.016904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:32.017400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.017609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.017637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:32.017654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:32.017948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:32.018247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:32.018270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:32.018285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:32.022845] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:32.031974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:32.032553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.032809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.032838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:32.032855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:32.033149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.857 [2024-04-24 22:15:32.033458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.857 [2024-04-24 22:15:32.033483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.857 [2024-04-24 22:15:32.033498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.857 [2024-04-24 22:15:32.038038] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.857 [2024-04-24 22:15:32.046868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.857 [2024-04-24 22:15:32.047365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.047521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.857 [2024-04-24 22:15:32.047550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.857 [2024-04-24 22:15:32.047567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.857 [2024-04-24 22:15:32.047863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.858 [2024-04-24 22:15:32.048162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.858 [2024-04-24 22:15:32.048185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.858 [2024-04-24 22:15:32.048201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.858 [2024-04-24 22:15:32.052739] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.858 [2024-04-24 22:15:32.061831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.858 [2024-04-24 22:15:32.062353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.062535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.062564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.858 [2024-04-24 22:15:32.062581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.858 [2024-04-24 22:15:32.062874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.858 [2024-04-24 22:15:32.063173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.858 [2024-04-24 22:15:32.063197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.858 [2024-04-24 22:15:32.063212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.858 [2024-04-24 22:15:32.067748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.858 [2024-04-24 22:15:32.076844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.858 [2024-04-24 22:15:32.077333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.077489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.077518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.858 [2024-04-24 22:15:32.077536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.858 [2024-04-24 22:15:32.077830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.858 [2024-04-24 22:15:32.078135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.858 [2024-04-24 22:15:32.078163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.858 [2024-04-24 22:15:32.078179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.858 [2024-04-24 22:15:32.082791] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.858 [2024-04-24 22:15:32.091912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.858 [2024-04-24 22:15:32.092409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.092578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.092607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.858 [2024-04-24 22:15:32.092624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.858 [2024-04-24 22:15:32.092919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.858 [2024-04-24 22:15:32.093217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.858 [2024-04-24 22:15:32.093240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.858 [2024-04-24 22:15:32.093256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:49.858 [2024-04-24 22:15:32.097793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:49.858 [2024-04-24 22:15:32.106887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:49.858 [2024-04-24 22:15:32.107469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.107670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:49.858 [2024-04-24 22:15:32.107698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:49.858 [2024-04-24 22:15:32.107715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:49.858 [2024-04-24 22:15:32.108009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:49.858 [2024-04-24 22:15:32.108308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.858 [2024-04-24 22:15:32.108331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:49.858 [2024-04-24 22:15:32.108345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.112884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.122013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.118 [2024-04-24 22:15:32.122524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.122802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.122830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.118 [2024-04-24 22:15:32.122847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.118 [2024-04-24 22:15:32.123141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.118 [2024-04-24 22:15:32.123451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.118 [2024-04-24 22:15:32.123476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.118 [2024-04-24 22:15:32.123491] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.128022] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.137126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.118 [2024-04-24 22:15:32.137683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.137933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.137962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.118 [2024-04-24 22:15:32.137978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.118 [2024-04-24 22:15:32.138272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.118 [2024-04-24 22:15:32.138581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.118 [2024-04-24 22:15:32.138605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.118 [2024-04-24 22:15:32.138619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.143148] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.151980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.118 [2024-04-24 22:15:32.152557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.152817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.152845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.118 [2024-04-24 22:15:32.152862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.118 [2024-04-24 22:15:32.153157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.118 [2024-04-24 22:15:32.153469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.118 [2024-04-24 22:15:32.153493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.118 [2024-04-24 22:15:32.153508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.158043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.166870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.118 [2024-04-24 22:15:32.167406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.167608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.167641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.118 [2024-04-24 22:15:32.167659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.118 [2024-04-24 22:15:32.167953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.118 [2024-04-24 22:15:32.168252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.118 [2024-04-24 22:15:32.168276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.118 [2024-04-24 22:15:32.168290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.172835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.181925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.118 [2024-04-24 22:15:32.182578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.182850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.118 [2024-04-24 22:15:32.182881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.118 [2024-04-24 22:15:32.182900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.118 [2024-04-24 22:15:32.183200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.118 [2024-04-24 22:15:32.183520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.118 [2024-04-24 22:15:32.183545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.118 [2024-04-24 22:15:32.183560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.118 [2024-04-24 22:15:32.188096] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.118 [2024-04-24 22:15:32.196923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.118 [2024-04-24 22:15:32.197489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.197725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.197753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.118 [2024-04-24 22:15:32.197770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.118 [2024-04-24 22:15:32.198064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.118 [2024-04-24 22:15:32.198363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.118 [2024-04-24 22:15:32.198386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.118 [2024-04-24 22:15:32.198416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.118 [2024-04-24 22:15:32.202951] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.118 [2024-04-24 22:15:32.211776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.118 [2024-04-24 22:15:32.212324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.212572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.212601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.118 [2024-04-24 22:15:32.212624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.118 [2024-04-24 22:15:32.212919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.118 [2024-04-24 22:15:32.213218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.118 [2024-04-24 22:15:32.213242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.118 [2024-04-24 22:15:32.213257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.118 [2024-04-24 22:15:32.217800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.118 [2024-04-24 22:15:32.226631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.118 [2024-04-24 22:15:32.227181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.227389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.227430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.118 [2024-04-24 22:15:32.227447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.118 [2024-04-24 22:15:32.227741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.118 [2024-04-24 22:15:32.228039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.118 [2024-04-24 22:15:32.228062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.118 [2024-04-24 22:15:32.228077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.118 [2024-04-24 22:15:32.232617] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.118 [2024-04-24 22:15:32.241709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.118 [2024-04-24 22:15:32.242278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.242517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.118 [2024-04-24 22:15:32.242546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.118 [2024-04-24 22:15:32.242563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.118 [2024-04-24 22:15:32.242858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.118 [2024-04-24 22:15:32.243157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.118 [2024-04-24 22:15:32.243180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.118 [2024-04-24 22:15:32.243195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.247738] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.256573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.257053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.257252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.257280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.257297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.257626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.257925] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.257948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.257963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.262505] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.271591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.272242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.272476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.272508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.272526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.272827] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.273126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.273149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.273164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.277712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.286546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.287050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.287335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.287363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.287380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.287686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.287986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.288009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.288024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.292568] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.301653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.302218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.302429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.302457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.302474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.302769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.303074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.303098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.303113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.307658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.316749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.317362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.317647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.317679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.317697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.317996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.318296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.318319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.318334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.322880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.331705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.332192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.332387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.332429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.332446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.332741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.333050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.333074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.333089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.337700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.346568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.347134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.347376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.347414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.347434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.347729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.348028] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.348057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.348073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.352616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.119 [2024-04-24 22:15:32.361448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.119 [2024-04-24 22:15:32.361977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.362182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.119 [2024-04-24 22:15:32.362211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.119 [2024-04-24 22:15:32.362228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.119 [2024-04-24 22:15:32.362533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.119 [2024-04-24 22:15:32.362834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.119 [2024-04-24 22:15:32.362857] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.119 [2024-04-24 22:15:32.362872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.119 [2024-04-24 22:15:32.367410] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.379 [2024-04-24 22:15:32.376524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.379 [2024-04-24 22:15:32.377041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.379 [2024-04-24 22:15:32.377206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.379 [2024-04-24 22:15:32.377234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.379 [2024-04-24 22:15:32.377251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.379 [2024-04-24 22:15:32.377556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.379 [2024-04-24 22:15:32.377855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.379 [2024-04-24 22:15:32.377879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.379 [2024-04-24 22:15:32.377893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.379 [2024-04-24 22:15:32.382436] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.379 [2024-04-24 22:15:32.391559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.379 [2024-04-24 22:15:32.392103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.379 [2024-04-24 22:15:32.392291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.379 [2024-04-24 22:15:32.392319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.392335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.392639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.392939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.392962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.392983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.397527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.406649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.407111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.407304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.407332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.407349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.407652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.407952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.407975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.407990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.412540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.421659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.422310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.422519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.422548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.422565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.422860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.423159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.423181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.423197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.427751] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.436605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.437174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.437382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.437428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.437446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.437740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.438039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.438062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.438077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.442631] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.451477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.451972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.452164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.452192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.452209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.452513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.452813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.452836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.452851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.457382] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.466498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.466987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.467114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.467142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.467159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.467463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.467762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.467785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.467800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.472340] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.481445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.481901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.482068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.482096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.482113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.482418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.482725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.482748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.482763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.487312] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.496562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.497047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.497222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.497251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.497268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.497571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.497870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.497895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.497910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.502454] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 [2024-04-24 22:15:32.511558] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.512053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.512237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.512264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.380 [2024-04-24 22:15:32.512282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.380 [2024-04-24 22:15:32.512585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.380 [2024-04-24 22:15:32.512886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.380 [2024-04-24 22:15:32.512909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.380 [2024-04-24 22:15:32.512924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.380 [2024-04-24 22:15:32.517465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4034882 Killed "${NVMF_APP[@]}" "$@"
00:23:50.380 22:15:32 -- host/bdevperf.sh@36 -- # tgt_init
00:23:50.380 22:15:32 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:23:50.380 22:15:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:23:50.380 22:15:32 -- common/autotest_common.sh@710 -- # xtrace_disable
00:23:50.380 22:15:32 -- common/autotest_common.sh@10 -- # set +x
00:23:50.380 [2024-04-24 22:15:32.526573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.380 [2024-04-24 22:15:32.527134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.380 [2024-04-24 22:15:32.527386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.527423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.381 [2024-04-24 22:15:32.527441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.381 [2024-04-24 22:15:32.527735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.381 [2024-04-24 22:15:32.528034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.381 [2024-04-24 22:15:32.528057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.381 [2024-04-24 22:15:32.528077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.381 22:15:32 -- nvmf/common.sh@470 -- # nvmfpid=4035951
00:23:50.381 22:15:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:50.381 22:15:32 -- nvmf/common.sh@471 -- # waitforlisten 4035951
00:23:50.381 22:15:32 -- common/autotest_common.sh@817 -- # '[' -z 4035951 ']'
00:23:50.381 22:15:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:50.381 22:15:32 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:50.381 22:15:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:50.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
22:15:32 -- common/autotest_common.sh@826 -- # xtrace_disable
00:23:50.381 22:15:32 -- common/autotest_common.sh@10 -- # set +x
00:23:50.381 [2024-04-24 22:15:32.532641] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.381 [2024-04-24 22:15:32.541486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.381 [2024-04-24 22:15:32.541996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.542174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.542202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.381 [2024-04-24 22:15:32.542218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.381 [2024-04-24 22:15:32.542523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.381 [2024-04-24 22:15:32.542823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.381 [2024-04-24 22:15:32.542846] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.381 [2024-04-24 22:15:32.542861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.381 [2024-04-24 22:15:32.547389] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.381 [2024-04-24 22:15:32.556493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.381 [2024-04-24 22:15:32.557005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.557160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.557188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.381 [2024-04-24 22:15:32.557205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.381 [2024-04-24 22:15:32.557508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.381 [2024-04-24 22:15:32.557809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.381 [2024-04-24 22:15:32.557831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.381 [2024-04-24 22:15:32.557846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.381 [2024-04-24 22:15:32.562377] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.381 [2024-04-24 22:15:32.571479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:50.381 [2024-04-24 22:15:32.571970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.572139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.381 [2024-04-24 22:15:32.572167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420
00:23:50.381 [2024-04-24 22:15:32.572190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set
00:23:50.381 [2024-04-24 22:15:32.572495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor
00:23:50.381 [2024-04-24 22:15:32.572794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:50.381 [2024-04-24 22:15:32.572817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:50.381 [2024-04-24 22:15:32.572832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:50.381 [2024-04-24 22:15:32.577360] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:50.381 [2024-04-24 22:15:32.577902] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:23:50.381 [2024-04-24 22:15:32.577969] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.381 [2024-04-24 22:15:32.586473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.381 [2024-04-24 22:15:32.586978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.587133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.587161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.381 [2024-04-24 22:15:32.587178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.381 [2024-04-24 22:15:32.587481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.381 [2024-04-24 22:15:32.587781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.381 [2024-04-24 22:15:32.587807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.381 [2024-04-24 22:15:32.587832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.381 [2024-04-24 22:15:32.592459] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.381 [2024-04-24 22:15:32.601326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.381 [2024-04-24 22:15:32.601860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.602022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.602050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.381 [2024-04-24 22:15:32.602066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.381 [2024-04-24 22:15:32.602365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.381 [2024-04-24 22:15:32.602673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.381 [2024-04-24 22:15:32.602697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.381 [2024-04-24 22:15:32.602712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.381 [2024-04-24 22:15:32.607242] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.381 [2024-04-24 22:15:32.616335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.381 [2024-04-24 22:15:32.616849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.617030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.617057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.381 [2024-04-24 22:15:32.617074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.381 [2024-04-24 22:15:32.617368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.381 [2024-04-24 22:15:32.617676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.381 [2024-04-24 22:15:32.617700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.381 [2024-04-24 22:15:32.617715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.381 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.381 [2024-04-24 22:15:32.622241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.381 [2024-04-24 22:15:32.631554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.381 [2024-04-24 22:15:32.632044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.632234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.381 [2024-04-24 22:15:32.632262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.381 [2024-04-24 22:15:32.632279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.381 [2024-04-24 22:15:32.632589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.381 [2024-04-24 22:15:32.632888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.381 [2024-04-24 22:15:32.632911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.381 [2024-04-24 22:15:32.632927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.641 [2024-04-24 22:15:32.637468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.641 [2024-04-24 22:15:32.646571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.647068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.647260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.647287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.647304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.647607] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.647906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.647930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.647945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.652487] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.656109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.642 [2024-04-24 22:15:32.661624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.662172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.662345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.662373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.662403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.662712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.663013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.663036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.663053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.667606] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.676738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.677281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.677470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.677500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.677520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.677821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.678125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.678149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.678168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.682715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.691820] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.692317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.692537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.692567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.692584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.692880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.693179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.693202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.693217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.697766] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.706868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.707376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.707569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.707610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.707628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.707923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.708222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.708245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.708261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.712802] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.721906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.722465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.722661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.722690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.722710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.723020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.723325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.723348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.723366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.727921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.736781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.737320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.737493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.737523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.737543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.737843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.738145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.738169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.738187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.742734] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.751853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.752358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.752547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.752576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.752606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.752902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.753200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.753224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.753240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.757787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.642 [2024-04-24 22:15:32.766886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.642 [2024-04-24 22:15:32.767408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.767664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.642 [2024-04-24 22:15:32.767692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.642 [2024-04-24 22:15:32.767709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.642 [2024-04-24 22:15:32.768004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.642 [2024-04-24 22:15:32.768303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.642 [2024-04-24 22:15:32.768327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.642 [2024-04-24 22:15:32.768342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.642 [2024-04-24 22:15:32.772886] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:50.642 [2024-04-24 22:15:32.776111] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.642 [2024-04-24 22:15:32.776151] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.642 [2024-04-24 22:15:32.776168] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.642 [2024-04-24 22:15:32.776181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:50.643 [2024-04-24 22:15:32.776193] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.643 [2024-04-24 22:15:32.776259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.643 [2024-04-24 22:15:32.776317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.643 [2024-04-24 22:15:32.776320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.643 [2024-04-24 22:15:32.781991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.782540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.782734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.782762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.782781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.783087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.783389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.783440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.783467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.788018] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.796875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.797502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.797690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.797718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.797740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.798051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.798358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.798383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.798410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.802959] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.811826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.812449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.812607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.812636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.812657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.812963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.813269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.813293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.813312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.817861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.826990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.827614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.827771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.827800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.827821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.828127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.828442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.828468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.828487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.833036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.842141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.842737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.842921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.842950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.842971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.843274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.843617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.843643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.843662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.848286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.857182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.857848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.858085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.858114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.858135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.858461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.858766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.858791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.858810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.863336] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.872191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.872658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.872867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.872895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.872912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.873207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.873517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.873542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.873557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.878081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.643 [2024-04-24 22:15:32.887187] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.643 [2024-04-24 22:15:32.887735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.887994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.643 [2024-04-24 22:15:32.888022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.643 [2024-04-24 22:15:32.888039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.643 [2024-04-24 22:15:32.888334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.643 [2024-04-24 22:15:32.888642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.643 [2024-04-24 22:15:32.888666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.643 [2024-04-24 22:15:32.888681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.643 [2024-04-24 22:15:32.893213] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.902 22:15:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:50.902 22:15:32 -- common/autotest_common.sh@850 -- # return 0 00:23:50.902 22:15:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:50.902 22:15:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:50.902 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.902 [2024-04-24 22:15:32.902043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.902 [2024-04-24 22:15:32.902528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.902656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.902696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.902714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.903008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.903307] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.903332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.903347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.903 [2024-04-24 22:15:32.907885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.903 [2024-04-24 22:15:32.916986] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 [2024-04-24 22:15:32.917499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.917646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.917674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.917691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.917985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.918292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.918316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.918331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.903 [2024-04-24 22:15:32.922881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.903 22:15:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.903 22:15:32 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.903 22:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.903 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 [2024-04-24 22:15:32.931224] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.903 [2024-04-24 22:15:32.931977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 [2024-04-24 22:15:32.932534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.932764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.932792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.932808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.933102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.933410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.933434] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.933449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:50.903 22:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.903 22:15:32 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.903 22:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.903 [2024-04-24 22:15:32.937976] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:50.903 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 [2024-04-24 22:15:32.947061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 [2024-04-24 22:15:32.947590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.947859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.947891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.947908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.948201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.948509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.948541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.948556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.903 [2024-04-24 22:15:32.953087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.903 [2024-04-24 22:15:32.961941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 [2024-04-24 22:15:32.962613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.962877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.962908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.962930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.963253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.963572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.963621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.963642] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:50.903 [2024-04-24 22:15:32.968186] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:50.903 Malloc0 00:23:50.903 22:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.903 22:15:32 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.903 22:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.903 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 [2024-04-24 22:15:32.977021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 [2024-04-24 22:15:32.977578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.977842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.903 [2024-04-24 22:15:32.977870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168c3e0 with addr=10.0.0.2, port=4420 00:23:50.903 [2024-04-24 22:15:32.977888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c3e0 is same with the state(5) to be set 00:23:50.903 [2024-04-24 22:15:32.978184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168c3e0 (9): Bad file descriptor 00:23:50.903 [2024-04-24 22:15:32.978494] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.903 [2024-04-24 22:15:32.978517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:50.903 [2024-04-24 22:15:32.978543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:50.903 22:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.903 22:15:32 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.903 22:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.903 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 [2024-04-24 22:15:32.983071] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:50.903 22:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.903 22:15:32 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.903 22:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.903 22:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 [2024-04-24 22:15:32.991464] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:50.903 [2024-04-24 22:15:32.991774] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.903 [2024-04-24 22:15:32.991897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:50.903 22:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.903 22:15:32 -- host/bdevperf.sh@38 -- # wait 4035168 00:23:50.903 [2024-04-24 22:15:33.132659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:00.876 00:24:00.876 Latency(us) 00:24:00.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:00.876 Verification LBA range: start 0x0 length 0x4000 00:24:00.876 Nvme1n1 : 15.01 5985.85 23.38 6821.22 0.00 9963.57 1080.13 20388.98 00:24:00.877 =================================================================================================================== 00:24:00.877 Total : 5985.85 23.38 6821.22 0.00 9963.57 1080.13 20388.98 00:24:00.877 22:15:42 -- host/bdevperf.sh@39 -- # sync 00:24:00.877 22:15:42 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.877 22:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.877 22:15:42 -- common/autotest_common.sh@10 -- # set +x 00:24:00.877 22:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.877 22:15:42 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:00.877 22:15:42 -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:00.877 22:15:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:00.877 22:15:42 -- nvmf/common.sh@117 -- # sync 00:24:00.877 22:15:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.877 22:15:42 -- nvmf/common.sh@120 -- # set +e 00:24:00.877 22:15:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.877 22:15:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.877 rmmod nvme_tcp 00:24:00.877 rmmod nvme_fabrics 00:24:00.877 rmmod nvme_keyring 00:24:00.877 22:15:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.877 22:15:42 -- nvmf/common.sh@124 -- # set -e 00:24:00.877 22:15:42 -- nvmf/common.sh@125 -- # return 0 00:24:00.877 22:15:42 -- nvmf/common.sh@478 -- # '[' -n 4035951 ']' 00:24:00.877 22:15:42 -- nvmf/common.sh@479 -- # killprocess 4035951 00:24:00.877 22:15:42 -- common/autotest_common.sh@936 -- # '[' -z 4035951 ']' 00:24:00.877 22:15:42 -- 
common/autotest_common.sh@940 -- # kill -0 4035951 00:24:00.877 22:15:42 -- common/autotest_common.sh@941 -- # uname 00:24:00.877 22:15:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:00.877 22:15:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4035951 00:24:00.877 22:15:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:00.877 22:15:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:00.877 22:15:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4035951' 00:24:00.877 killing process with pid 4035951 00:24:00.877 22:15:42 -- common/autotest_common.sh@955 -- # kill 4035951 00:24:00.877 [2024-04-24 22:15:42.552074] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:00.877 22:15:42 -- common/autotest_common.sh@960 -- # wait 4035951 00:24:00.877 22:15:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:00.877 22:15:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:00.877 22:15:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:00.877 22:15:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.877 22:15:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.877 22:15:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.877 22:15:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.877 22:15:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.786 22:15:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.786 00:24:02.786 real 0m23.168s 00:24:02.786 user 1m1.726s 00:24:02.786 sys 0m4.653s 00:24:02.786 22:15:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:02.786 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 ************************************ 00:24:02.786 END TEST nvmf_bdevperf 
00:24:02.786 ************************************ 00:24:02.786 22:15:44 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:02.786 22:15:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.786 22:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.786 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 ************************************ 00:24:02.786 START TEST nvmf_target_disconnect 00:24:02.786 ************************************ 00:24:02.786 22:15:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:03.045 * Looking for test storage... 00:24:03.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.045 22:15:45 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.045 22:15:45 -- nvmf/common.sh@7 -- # uname -s 00:24:03.045 22:15:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.045 22:15:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.045 22:15:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.045 22:15:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.045 22:15:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.045 22:15:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.045 22:15:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.045 22:15:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.045 22:15:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.045 22:15:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.045 22:15:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:03.045 22:15:45 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:03.045 22:15:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.045 22:15:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.045 22:15:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.045 22:15:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.045 22:15:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.045 22:15:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.045 22:15:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.045 22:15:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.045 22:15:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.045 22:15:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.045 22:15:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.045 22:15:45 -- paths/export.sh@5 -- # export PATH 00:24:03.045 22:15:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.045 22:15:45 -- nvmf/common.sh@47 -- # : 0 00:24:03.045 22:15:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.045 22:15:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.045 22:15:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.045 22:15:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.045 22:15:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.045 22:15:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.045 22:15:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.045 22:15:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.045 22:15:45 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:03.045 22:15:45 -- host/target_disconnect.sh@13 -- # 
MALLOC_BDEV_SIZE=64 00:24:03.045 22:15:45 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:03.045 22:15:45 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:24:03.045 22:15:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:03.045 22:15:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.045 22:15:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:03.045 22:15:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:03.045 22:15:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:03.045 22:15:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.045 22:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.046 22:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.046 22:15:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:03.046 22:15:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:03.046 22:15:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.046 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:24:05.577 22:15:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.577 22:15:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.577 22:15:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.577 22:15:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.577 22:15:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.577 22:15:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.577 22:15:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.577 22:15:47 -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.577 22:15:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.577 22:15:47 -- nvmf/common.sh@296 -- # e810=() 00:24:05.577 22:15:47 -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.577 22:15:47 -- nvmf/common.sh@297 -- # x722=() 00:24:05.577 22:15:47 -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.577 22:15:47 -- nvmf/common.sh@298 -- # mlx=() 00:24:05.577 22:15:47 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:24:05.578 22:15:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.578 22:15:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.578 22:15:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.578 22:15:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.578 22:15:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:05.578 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:05.578 22:15:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.578 
22:15:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.578 22:15:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:05.578 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:05.578 22:15:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.578 22:15:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.578 22:15:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.578 22:15:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:05.578 Found net devices under 0000:84:00.0: cvl_0_0 00:24:05.578 22:15:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.578 22:15:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.578 22:15:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.578 22:15:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.578 22:15:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:05.578 Found net devices under 0000:84:00.1: cvl_0_1 00:24:05.578 22:15:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.578 22:15:47 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:05.578 22:15:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:05.578 22:15:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.578 22:15:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.578 22:15:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.578 22:15:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.578 22:15:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.578 22:15:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.578 22:15:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.578 22:15:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.578 22:15:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.578 22:15:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.578 22:15:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.578 22:15:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.578 22:15:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.578 22:15:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.578 22:15:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.578 22:15:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.578 22:15:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.578 22:15:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.578 22:15:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.578 22:15:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 
00:24:05.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:24:05.578 00:24:05.578 --- 10.0.0.2 ping statistics --- 00:24:05.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.578 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:05.578 22:15:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:05.578 00:24:05.578 --- 10.0.0.1 ping statistics --- 00:24:05.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.578 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:05.578 22:15:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.578 22:15:47 -- nvmf/common.sh@411 -- # return 0 00:24:05.578 22:15:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:05.578 22:15:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.578 22:15:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:05.578 22:15:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.578 22:15:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:05.578 22:15:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:05.578 22:15:47 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:05.578 22:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:05.578 22:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:05.578 22:15:47 -- common/autotest_common.sh@10 -- # set +x 00:24:05.578 ************************************ 00:24:05.578 START TEST nvmf_target_disconnect_tc1 00:24:05.578 ************************************ 00:24:05.578 22:15:47 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 
00:24:05.578 22:15:47 -- host/target_disconnect.sh@32 -- # set +e
00:24:05.578 22:15:47 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:05.836 EAL: No free 2048 kB hugepages reported on node 1
00:24:05.836 [2024-04-24 22:15:47.909660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:05.836 [2024-04-24 22:15:47.909985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:05.836 [2024-04-24 22:15:47.910025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x518390 with addr=10.0.0.2, port=4420
00:24:05.836 [2024-04-24 22:15:47.910067] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:05.836 [2024-04-24 22:15:47.910091] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:05.836 [2024-04-24 22:15:47.910106] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:24:05.836 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:24:05.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:24:05.836 Initializing NVMe Controllers
00:24:05.836 22:15:47 -- host/target_disconnect.sh@33 -- # trap - ERR
00:24:05.836 22:15:47 -- host/target_disconnect.sh@33 -- # print_backtrace
00:24:05.836 22:15:47 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:24:05.836 22:15:47 -- common/autotest_common.sh@1139 -- # return 0
00:24:05.836 22:15:47 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:24:05.836 22:15:47 -- host/target_disconnect.sh@41 -- # set -e
00:24:05.836
00:24:05.836 real	0m0.116s
00:24:05.836 user	0m0.048s
00:24:05.836 sys	0m0.068s
00:24:05.837 22:15:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:05.837 22:15:47 -- common/autotest_common.sh@10 -- # set +x
00:24:05.837 ************************************
00:24:05.837 END TEST nvmf_target_disconnect_tc1
00:24:05.837 ************************************
00:24:05.837 22:15:47 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:24:05.837 22:15:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:05.837 22:15:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:05.837 22:15:47 -- common/autotest_common.sh@10 -- # set +x
00:24:05.837 ************************************
00:24:05.837 START TEST nvmf_target_disconnect_tc2
00:24:05.837 ************************************
00:24:05.837 22:15:48 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
00:24:05.837 22:15:48 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:24:05.837 22:15:48 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:05.837 22:15:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:24:05.837 22:15:48 -- common/autotest_common.sh@710 -- # xtrace_disable
00:24:05.837 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:05.837 22:15:48 -- nvmf/common.sh@470 -- # nvmfpid=4039140
00:24:05.837 22:15:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:05.837 22:15:48 -- nvmf/common.sh@471 -- # waitforlisten 4039140
00:24:05.837 22:15:48 -- common/autotest_common.sh@817 -- # '[' -z 4039140 ']'
00:24:05.837 22:15:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:05.837 22:15:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:05.837 22:15:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:05.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
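The tc1 logic traced above is an inverted check: with no target listening on 10.0.0.2:4420, the `reconnect` example must fail, and the test passes only when it does (`'[' 1 '!=' 1 ']'` confirms a non-zero exit status was seen). A minimal sketch of that control flow, with a stub function standing in for the real `build/examples/reconnect` binary:

```shell
#!/bin/sh
# Sketch of the tc1 pass/fail logic. 'reconnect' below is a stub that fails
# the way the real binary does when nothing listens on 10.0.0.2:4420.
reconnect() {
    echo "spdk_nvme_probe() failed for transport address '10.0.0.2'" >&2
    return 1
}

set +e          # allow the expected failure without aborting the script
reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 2>/dev/null
rc=$?
set -e

# The test passes only if the probe failed (rc != 0), mirroring
# host/target_disconnect.sh@37 in the trace.
if [ "$rc" -ne 0 ]; then
    echo "tc1 OK: probe failed as expected (rc=$rc)"
else
    echo "tc1 FAILED: connect unexpectedly succeeded" >&2
    exit 1
fi
```

The `errno = 111` (ECONNREFUSED) lines in the log are exactly this expected failure mode.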
00:24:05.837 22:15:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:05.837 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.095 [2024-04-24 22:15:48.137318] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:24:06.095 [2024-04-24 22:15:48.137421] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:06.095 EAL: No free 2048 kB hugepages reported on node 1
00:24:06.095 [2024-04-24 22:15:48.215055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:06.095 [2024-04-24 22:15:48.336397] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:06.095 [2024-04-24 22:15:48.336468] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:06.095 [2024-04-24 22:15:48.336494] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:06.095 [2024-04-24 22:15:48.336515] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:06.095 [2024-04-24 22:15:48.336532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:06.095 [2024-04-24 22:15:48.336629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:24:06.095 [2024-04-24 22:15:48.336688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:24:06.095 [2024-04-24 22:15:48.336758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:24:06.095 [2024-04-24 22:15:48.336747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:24:06.353 22:15:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:06.353 22:15:48 -- common/autotest_common.sh@850 -- # return 0
00:24:06.353 22:15:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:24:06.353 22:15:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 22:15:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:06.353 22:15:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 Malloc0
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 [2024-04-24 22:15:48.528155] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 [2024-04-24 22:15:48.556145] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:24:06.353 [2024-04-24 22:15:48.556503] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:06.353 22:15:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.353 22:15:48 -- common/autotest_common.sh@10 -- # set +x
00:24:06.353 22:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.353 22:15:48 -- host/target_disconnect.sh@50 -- # reconnectpid=4039190
00:24:06.353 22:15:48 -- host/target_disconnect.sh@52 -- # sleep 2
00:24:06.353 22:15:48 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:06.611 EAL: No free 2048 kB hugepages reported on node 1
00:24:08.524 22:15:50 -- host/target_disconnect.sh@53 -- # kill -9 4039140
00:24:08.524 22:15:50
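The `rpc_cmd` calls traced above configure the target over its JSON-RPC socket. A sketch of the same setup written as plain `scripts/rpc.py` invocations (an illustration, not the harness itself: the `rpc` dry-run wrapper is our own, and a running `nvmf_tgt` listening on `/var/tmp/spdk.sock` is assumed before the real commands would work):

```shell
#!/bin/sh
# Dry-run sketch of the target-side setup from the log. Replace the echo in
# rpc() with the real scripts/rpc.py to apply it against a live nvmf_tgt.
rpc() { echo "+ scripts/rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM-backed bdev, 512 B blocks
rpc nvmf_create_transport -t tcp -o             # '-o' mirrors NVMF_TRANSPORT_OPTS='-t tcp -o' in the log
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
rpc nvmf_subsystem_add_ns "$NQN" Malloc0        # expose the bdev as a namespace
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), the test launches the reconnect workload and then `kill -9`s the target to exercise the disconnect path.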
-- host/target_disconnect.sh@55 -- # sleep 2 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Write completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Write completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.524 starting I/O failed 00:24:08.524 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, 
sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 [2024-04-24 22:15:50.581629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 
starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 [2024-04-24 22:15:50.582008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O 
failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Read completed with error (sct=0, sc=8) 00:24:08.525 starting I/O failed 00:24:08.525 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 
Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 [2024-04-24 22:15:50.582354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed 
with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Write completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 Read completed with error (sct=0, sc=8) 00:24:08.526 starting I/O failed 00:24:08.526 [2024-04-24 22:15:50.582741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:08.526 [2024-04-24 22:15:50.583014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.583224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:24:08.526 [2024-04-24 22:15:50.583278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.526 qpair failed and we were unable to recover it. 00:24:08.526 [2024-04-24 22:15:50.583445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.583612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.583657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.526 qpair failed and we were unable to recover it. 00:24:08.526 [2024-04-24 22:15:50.583960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.526 qpair failed and we were unable to recover it. 00:24:08.526 [2024-04-24 22:15:50.584363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.526 qpair failed and we were unable to recover it. 
00:24:08.526 [2024-04-24 22:15:50.584732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.584934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.526 qpair failed and we were unable to recover it. 00:24:08.526 [2024-04-24 22:15:50.585111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.526 [2024-04-24 22:15:50.585284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.585312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.585500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.585686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.585713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.585848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 
00:24:08.527 [2024-04-24 22:15:50.586303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.586716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.586958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.587143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.587322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.587349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.587496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.587683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.587711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 
00:24:08.527 [2024-04-24 22:15:50.587885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.588227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.588622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.588823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 00:24:08.527 [2024-04-24 22:15:50.589020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.589211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.527 [2024-04-24 22:15:50.589238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.527 qpair failed and we were unable to recover it. 
00:24:08.527 [2024-04-24 22:15:50.589421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.527 [2024-04-24 22:15:50.589564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.527 [2024-04-24 22:15:50.589591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.527 qpair failed and we were unable to recover it.
00:24:08.532 [ last 4 messages repeated for every subsequent connection retry, 2024-04-24 22:15:50.589 through 22:15:50.625 (Jenkins timestamps 00:24:08.527-00:24:08.532): connect() to addr=10.0.0.2, port=4420 failed with errno = 111 each time, and tqpair=0x7f9d68000b90 could not be recovered ]
00:24:08.532 [2024-04-24 22:15:50.625886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.626278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.626699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.626902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.627114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.627275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.627302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 
00:24:08.532 [2024-04-24 22:15:50.627478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.627648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.627675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.627805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.628239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.628651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.628888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 
00:24:08.532 [2024-04-24 22:15:50.629067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.629268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.629295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.629510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.629729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.629756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.629973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.630389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 
00:24:08.532 [2024-04-24 22:15:50.630747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.630994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.532 qpair failed and we were unable to recover it. 00:24:08.532 [2024-04-24 22:15:50.631156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.631341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.532 [2024-04-24 22:15:50.631368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.631548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.631721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.631748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.631922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 
00:24:08.533 [2024-04-24 22:15:50.632415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.632815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.632998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.633170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.633341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.633368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.633575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.633788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.633815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 
00:24:08.533 [2024-04-24 22:15:50.633953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.634155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.634182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.634384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.634601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.634627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.634836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.634992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.635018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.635237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.635448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.635476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 
00:24:08.533 [2024-04-24 22:15:50.635681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.635841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.635868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.636034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.636240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.636268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.636523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.636716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.636743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.636953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 
00:24:08.533 [2024-04-24 22:15:50.637356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.637664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.637853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.638073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.638270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.638297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.638447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.638588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.638615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 
00:24:08.533 [2024-04-24 22:15:50.638814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.639011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.639038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.533 [2024-04-24 22:15:50.639212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.639348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.533 [2024-04-24 22:15:50.639375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.533 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.639583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.639782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.639809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.640006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.640174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.640200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 
00:24:08.534 [2024-04-24 22:15:50.640411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.640623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.640650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.640847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.641248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.641683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.641899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 
00:24:08.534 [2024-04-24 22:15:50.642080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.642240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.642268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.642448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.642621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.642648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.642813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.642981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.643008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.643204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.643370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.643404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 
00:24:08.534 [2024-04-24 22:15:50.643595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.643801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.643828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.644041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.644409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.644790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.644991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 
00:24:08.534 [2024-04-24 22:15:50.645197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.645403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.645431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.645651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.645827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.645854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.646042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.646217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.646244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.646446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.646620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.646647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 
00:24:08.534 [2024-04-24 22:15:50.646845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.647049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.647076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.534 qpair failed and we were unable to recover it. 00:24:08.534 [2024-04-24 22:15:50.647261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.647446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.534 [2024-04-24 22:15:50.647474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.647657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.647859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.647886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.648061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.648232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.648259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 
00:24:08.535 [2024-04-24 22:15:50.648448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.648639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.648667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.648818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.649215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.649608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.649799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 
00:24:08.535 [2024-04-24 22:15:50.650000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.650423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.650754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.650976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 00:24:08.535 [2024-04-24 22:15:50.651146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.651323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.535 [2024-04-24 22:15:50.651350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.535 qpair failed and we were unable to recover it. 
00:24:08.535 [2024-04-24 22:15:50.651534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.651711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.651738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.651920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.652121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.652148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.652316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.652540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.652574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.652797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.653271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.653626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.653822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.654031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.654206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.654233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.654437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.654647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.654836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.655242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.655670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.535 [2024-04-24 22:15:50.655894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.535 qpair failed and we were unable to recover it.
00:24:08.535 [2024-04-24 22:15:50.656065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.656230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.656256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.656466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.656662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.656717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.656929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.657350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.657654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.657878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.658088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.658294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.658321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.658552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.658723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.658750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.658960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.659311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.659718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.659909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.660117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.660303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.660330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.660462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.660667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.660701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.660877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.661275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.661678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.661920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.662081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.662288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.662315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.662521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.662717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.662744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.662960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.663159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.663186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.663359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.663573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.663602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.663848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.664021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.536 [2024-04-24 22:15:50.664048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.536 qpair failed and we were unable to recover it.
00:24:08.536 [2024-04-24 22:15:50.664258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.664467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.664495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.664702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.664834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.664866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.665027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.665341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.665741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.665962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.666174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.666387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.666434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.666625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.666788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.666815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.666997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.667353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.667789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.667985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.668181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.668347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.668374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.668580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.668742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.668770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.668945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.669287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.669677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.669874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.670050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.670508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.670826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.670996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.671150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.671317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.671344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.671493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.671694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.671721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.537 [2024-04-24 22:15:50.671892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.672045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.537 [2024-04-24 22:15:50.672116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.537 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.672277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.672481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.672508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.672717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.672886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.672912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.673112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.673301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.673328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.673483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.673648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.673713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.673973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.674206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.674256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.674423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.674620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.674648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.674863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.675248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.675641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.675841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.676002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.676346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.676681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.676872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.677060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.677230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.677257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.677432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.677595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.677622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.538 qpair failed and we were unable to recover it.
00:24:08.538 [2024-04-24 22:15:50.677792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.538 [2024-04-24 22:15:50.677951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.677978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.678104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.678260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.678287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.678474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.678608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.678635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.678835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.679213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.679656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.539 [2024-04-24 22:15:50.679843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.539 qpair failed and we were unable to recover it.
00:24:08.539 [2024-04-24 22:15:50.680058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.680219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.680246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.680460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.680633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.680661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.680862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.681246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 
00:24:08.539 [2024-04-24 22:15:50.681693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.681928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.682180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.682380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.682421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.682626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.682769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.682795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.683002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.683173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.683200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 
00:24:08.539 [2024-04-24 22:15:50.683409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.683603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.683630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.683817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.683985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.684012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.684158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.684340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.684367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.684544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.684717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.684744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 
00:24:08.539 [2024-04-24 22:15:50.684913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.685132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.685159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.685321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.685533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.685582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.539 qpair failed and we were unable to recover it. 00:24:08.539 [2024-04-24 22:15:50.685779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.539 [2024-04-24 22:15:50.685968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.686017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.686212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.686416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.686444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 
00:24:08.540 [2024-04-24 22:15:50.686656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.686856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.686883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.687086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.687212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.687239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.687443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.687606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.687634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.687839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 
00:24:08.540 [2024-04-24 22:15:50.688238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.688575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.688766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.688938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.689300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 
00:24:08.540 [2024-04-24 22:15:50.689622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.689817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.689986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.690343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.690777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.690998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 
00:24:08.540 [2024-04-24 22:15:50.691197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.691407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.691434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.691644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.691844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.691871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.692070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.692260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.692287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.692468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.692639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.692666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 
00:24:08.540 [2024-04-24 22:15:50.692871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.693034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.693061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.693225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.693431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.693459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.540 qpair failed and we were unable to recover it. 00:24:08.540 [2024-04-24 22:15:50.693630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.540 [2024-04-24 22:15:50.693824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.693851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.694020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.694149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.694176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 
00:24:08.541 [2024-04-24 22:15:50.694390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.694580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.694607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.694773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.694976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.695003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.695188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.695360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.695387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.695569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.695766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.695792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 
00:24:08.541 [2024-04-24 22:15:50.695969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.696341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.696728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.696939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.697150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.697361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.697388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 
00:24:08.541 [2024-04-24 22:15:50.697614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.697743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.697770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.697940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.698116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.698143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.698359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.698582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.698611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.698838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 
00:24:08.541 [2024-04-24 22:15:50.699245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.699612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.699834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.700000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.700206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.700233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.700413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.700628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.700656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 
00:24:08.541 [2024-04-24 22:15:50.700853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.701024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.701051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.701233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.701435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.541 [2024-04-24 22:15:50.701463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.541 qpair failed and we were unable to recover it. 00:24:08.541 [2024-04-24 22:15:50.701632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.701820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.701847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.702019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.702190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.702217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 
00:24:08.542 [2024-04-24 22:15:50.702421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.702632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.702658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.702866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.703249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.703681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.703901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 
00:24:08.542 [2024-04-24 22:15:50.704067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.704264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.704291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.704502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.704688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.704715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.704879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.705231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 
00:24:08.542 [2024-04-24 22:15:50.705566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.705790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.705958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.706331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.706651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.706870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 
00:24:08.542 [2024-04-24 22:15:50.707058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.707255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.707282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.707445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.707642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.707669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.707839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.708309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 
00:24:08.542 [2024-04-24 22:15:50.708642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.708827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.542 [2024-04-24 22:15:50.709018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.709205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.542 [2024-04-24 22:15:50.709231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.542 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.709407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.709544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.709572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.709766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.709963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.709990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 
00:24:08.543 [2024-04-24 22:15:50.710218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.710421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.710449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.710880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.710907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.711090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.711268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.711296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.711470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.711616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.711643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 
00:24:08.543 [2024-04-24 22:15:50.711813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.711986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.712013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.712185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.712350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.712378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.712548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.712744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.712771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.712931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.713175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.713202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 
00:24:08.543 [2024-04-24 22:15:50.713414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.713585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.713612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.713813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.714293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.714671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.714870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 
00:24:08.543 [2024-04-24 22:15:50.715026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.715155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.715182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.715377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.715587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.715614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.715826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.716026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.716052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 00:24:08.543 [2024-04-24 22:15:50.716248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.716420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.716453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.543 qpair failed and we were unable to recover it. 
00:24:08.543 [2024-04-24 22:15:50.716648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.543 [2024-04-24 22:15:50.716812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.716838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.717033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.717208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.717235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.717410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.717637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.717665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.717862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 
00:24:08.544 [2024-04-24 22:15:50.718284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.718659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.718838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.719019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.719407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 
00:24:08.544 [2024-04-24 22:15:50.719778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.719973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.720172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.720373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.720417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.720630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.720838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.720865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.721065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.721248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.721275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 
00:24:08.544 [2024-04-24 22:15:50.721444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.721647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.721674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.721926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.722318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.722692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.722911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 
00:24:08.544 [2024-04-24 22:15:50.723078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.723255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.723282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.723491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.723667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.723695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.723866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.724042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.724069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.544 qpair failed and we were unable to recover it. 00:24:08.544 [2024-04-24 22:15:50.724249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.724457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.544 [2024-04-24 22:15:50.724490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.546 qpair failed and we were unable to recover it. 
00:24:08.547 [2024-04-24 22:15:50.724692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.724863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.724890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.725088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.725252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.725279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.725476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.725704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.725731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.725928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 
00:24:08.547 [2024-04-24 22:15:50.726375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.726746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.726964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.727146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.727334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.727361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.727549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.727720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.727747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 
00:24:08.547 [2024-04-24 22:15:50.727951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.728129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.728156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.728358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.728583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.728616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.728784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.728987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.729014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 00:24:08.547 [2024-04-24 22:15:50.729243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.729443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.547 [2024-04-24 22:15:50.729472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.547 qpair failed and we were unable to recover it. 
00:24:08.548 [2024-04-24 22:15:50.729643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.729811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.729838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.730010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.730343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.730711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.730945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 
00:24:08.548 [2024-04-24 22:15:50.731141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.731267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.731294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.731496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.731674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.731701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.731911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.732279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 
00:24:08.548 [2024-04-24 22:15:50.732638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.732826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.733046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.733171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.733198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.733389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.733616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.733644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 00:24:08.548 [2024-04-24 22:15:50.733803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.733988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.734015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.548 qpair failed and we were unable to recover it. 
00:24:08.548 [2024-04-24 22:15:50.734171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.548 [2024-04-24 22:15:50.734333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.734360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.549 qpair failed and we were unable to recover it. 00:24:08.549 [2024-04-24 22:15:50.734553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.734707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.734734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.549 qpair failed and we were unable to recover it. 00:24:08.549 [2024-04-24 22:15:50.734945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.735112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.735139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.549 qpair failed and we were unable to recover it. 00:24:08.549 [2024-04-24 22:15:50.735335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.735476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.549 [2024-04-24 22:15:50.735505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.549 qpair failed and we were unable to recover it. 
00:24:08.876 [2024-04-24 22:15:50.769526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.769704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.769732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.769898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.770254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.770660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.770888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 
00:24:08.876 [2024-04-24 22:15:50.771123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.771290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.771316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.771512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.771644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.771671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.771871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.772061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.876 [2024-04-24 22:15:50.772088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.876 qpair failed and we were unable to recover it. 00:24:08.876 [2024-04-24 22:15:50.772267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.772513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.772541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.772748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.772929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.772957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.773122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.773294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.773322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.773519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.773684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.773712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.773911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.774291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.774711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.774934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.775135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.775305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.775332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.775506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.775716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.775743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.775903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.776246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.776650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.776844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.777033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.777237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.777264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.777455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.777668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.777695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.777916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.778256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.778590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.778820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.778994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.779365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.779701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.779909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.780087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.780282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.780309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.780519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.780714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.780748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.780954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.781309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.781753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.781944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 
00:24:08.877 [2024-04-24 22:15:50.782152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.782362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.782389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.782560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.782713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.782740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.782921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.783128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.783155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.877 qpair failed and we were unable to recover it. 00:24:08.877 [2024-04-24 22:15:50.783400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.783557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.877 [2024-04-24 22:15:50.783584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.783789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.783936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.783963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.784159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.784314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.784341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.784522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.784690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.784730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.784893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.785109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.785135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.785347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.785558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.785586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.785786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.785992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.786019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.786218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.786430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.786459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.786669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.786832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.786859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.787064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.787263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.787290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.787486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.787640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.787667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.787851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.788254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.788565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.788780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.788995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.789189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.789215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.789451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.789588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.789615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.789802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.789978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.790005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.790211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.790410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.790438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.790610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.790777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.790804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.790941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.791343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.791697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.791876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.792024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.792387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.792784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.792965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.793141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.793276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.793303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.793466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.793653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.793680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.793845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.794032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.794059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 00:24:08.878 [2024-04-24 22:15:50.794216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.794342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.878 [2024-04-24 22:15:50.794369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.878 qpair failed and we were unable to recover it. 
00:24:08.878 [2024-04-24 22:15:50.794519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.878 [2024-04-24 22:15:50.794656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.794683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.794844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.795186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.795570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.795755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.795907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.796315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.796696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.796903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.797097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.797277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.797305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.797479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.797632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.797659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.797819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.797979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.798007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.798193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.798355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.798381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.798578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.798751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.798779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.798935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.799311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.799688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.799873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.800036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.800319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.800649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.800833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.801020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.801368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.801788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.801996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.802180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.802339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.802367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.802533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.802715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.802743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.802907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.803244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.803660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.803848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.804009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.804325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.879 [2024-04-24 22:15:50.804598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.879 [2024-04-24 22:15:50.804797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.879 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.804958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.805303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.805638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.805813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.805968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.806313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.806690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.806900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.807064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.807255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.807282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.807474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.807659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.807687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.807858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.808229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.808639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.808830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.808981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.809343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.809716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.809928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.810066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.810409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.810789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.810969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.811138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.811331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.811358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.811507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.811669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.811696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.811860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.812246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.812601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.812760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.812933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.813278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.813692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.813852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.813983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.814342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.814658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.814852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.880 qpair failed and we were unable to recover it.
00:24:08.880 [2024-04-24 22:15:50.814990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.815145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.880 [2024-04-24 22:15:50.815172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.815335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.815476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.815504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.815635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.815794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.815821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.815945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.816315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.816664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.816850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.817024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.817392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.817721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.817885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.818020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.818345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.818756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.818939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.819127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.819290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.819318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.819469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.819630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.819658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.819853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.820011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.881 [2024-04-24 22:15:50.820040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.881 qpair failed and we were unable to recover it.
00:24:08.881 [2024-04-24 22:15:50.820205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.820362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.820389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.820534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.820719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.820748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.820933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.821284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 
00:24:08.881 [2024-04-24 22:15:50.821638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.821855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.822020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.822320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 00:24:08.881 [2024-04-24 22:15:50.822668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.822860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.881 qpair failed and we were unable to recover it. 
00:24:08.881 [2024-04-24 22:15:50.822988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.881 [2024-04-24 22:15:50.823148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.823175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.823335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.823521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.823550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.823675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.823838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.823865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.824059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.824380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.824733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.824920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.825074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.825348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.825729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.825947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.826107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.826433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.826761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.826975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.827160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.827322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.827350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.827497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.827633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.827660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.827837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.827997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.828024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.828211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.829239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.829582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.829737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.833416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.833564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.833593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.833794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.833970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.833997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.834128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.834431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.834744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.834925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.835075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.835273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.835300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.835456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.835595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.835622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.835803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.835974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.836001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.836175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.836340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.836367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.836533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.836686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.836713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 
00:24:08.882 [2024-04-24 22:15:50.836925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.837111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.837138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.882 qpair failed and we were unable to recover it. 00:24:08.882 [2024-04-24 22:15:50.837325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.882 [2024-04-24 22:15:50.837497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.837526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.837700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.837843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.837871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.838064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.838208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.838235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.838388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.838535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.838562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.839354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.839547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.839577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.839713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.839905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.839932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.840067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.840428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.840716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.840904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.841058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.841343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.841658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.841832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.841992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.842335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.842705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.842905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.843100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.843616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.843658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.843861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.844227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.844553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.844728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.844943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.845285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.845599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.845775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.845946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.846335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.846669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.846832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.846973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.847380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 
00:24:08.883 [2024-04-24 22:15:50.847714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.847884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.848045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.848176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.883 [2024-04-24 22:15:50.848204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.883 qpair failed and we were unable to recover it. 00:24:08.883 [2024-04-24 22:15:50.848372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.848517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.848545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 00:24:08.884 [2024-04-24 22:15:50.848690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.848861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.848888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 
00:24:08.884 [2024-04-24 22:15:50.849073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.849264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.849291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 00:24:08.884 [2024-04-24 22:15:50.849458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.849597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.849624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 00:24:08.884 [2024-04-24 22:15:50.849783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.850879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.850912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 00:24:08.884 [2024-04-24 22:15:50.851154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.851296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.884 [2024-04-24 22:15:50.851324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.884 qpair failed and we were unable to recover it. 
00:24:08.884 [2024-04-24 22:15:50.851495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.851660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.851688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.851850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.852178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.852529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.852705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.852921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.853268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.853582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.853747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.853937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.854257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.854606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.854793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.854980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.855401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.855792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.855983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.856111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.856281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.856308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.856505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.856646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.856673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.856856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.857230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.857585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.857803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.857993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.858373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.858685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.858854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.858996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.859164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.859191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.859365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.859522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.884 [2024-04-24 22:15:50.859550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.884 qpair failed and we were unable to recover it.
00:24:08.884 [2024-04-24 22:15:50.859744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.859927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.859954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.860115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.860323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.860350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.860495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.860633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.860660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.860828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.860977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.861013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.861202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.861369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.861405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.861544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.861697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.861724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.861873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.862217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.862566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.862744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.862909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.863341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.863662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.863897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.864047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.864210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.864237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.864441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.864581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.864609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.864808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.864998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.865025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.865186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.865343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.865370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.865514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.865649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.865676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.865855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.866241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.866550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.866733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.866888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.867226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.867548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.867741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.867923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.868110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.885 [2024-04-24 22:15:50.868146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.885 qpair failed and we were unable to recover it.
00:24:08.885 [2024-04-24 22:15:50.868323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.868499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.868527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.868654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.868833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.868860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.869017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.869407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.869750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.869954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.870988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.871405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.871724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.871906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.872037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.872189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.872216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.872412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.872550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.872578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.872771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.872973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.873001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.873175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.873361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.873390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.873561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.873720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.873748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.873896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.874278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.874596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.874842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.874968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.875302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.875647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.875869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.876040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.876412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.876738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.876907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.877036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.877407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.877724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.877913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.878064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.878427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.878743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.878908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.886 [2024-04-24 22:15:50.879041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.879203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.886 [2024-04-24 22:15:50.879230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.886 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.879379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.879517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.879545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.879746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.879902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.879929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.880091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.880455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.880807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.880972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.881138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.881484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.881796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.881963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.882085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.882415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.882715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.882897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.883083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.883363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.883694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.883885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.884042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.884387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.884724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.884888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.885017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.885364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.885673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.885863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.886049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.886362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.886714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.886899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.887051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.887355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.887659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.887814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.887938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.888288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.887 [2024-04-24 22:15:50.888588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.887 [2024-04-24 22:15:50.888745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.887 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.888906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.889278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.889597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.889808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.889943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.890303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.890588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.890781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.890912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.891230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.891540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.891866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.891986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.892164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.892467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.892780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.892938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.893119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.893448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.893742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.893950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.894107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.894414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.894722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.894931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.895118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.895454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.895782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.895936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.896099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.896442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.896760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.896904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.897087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.897429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.897714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.897921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.888 qpair failed and we were unable to recover it.
00:24:08.888 [2024-04-24 22:15:50.898074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.888 [2024-04-24 22:15:50.898229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.898255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.898436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.898565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.898591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.898744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.898898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.898927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.899086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.899440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.899753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.899935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.900089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.900413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.900750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.900923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.901083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.901237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.901268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.901426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.901560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.889 [2024-04-24 22:15:50.901588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.889 qpair failed and we were unable to recover it.
00:24:08.889 [2024-04-24 22:15:50.901707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.901835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.901862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.902019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.902321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.902633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.902817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 
00:24:08.889 [2024-04-24 22:15:50.902978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.903308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.903636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.903792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.903957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 
00:24:08.889 [2024-04-24 22:15:50.904270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.904627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.904808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.904992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.905319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 
00:24:08.889 [2024-04-24 22:15:50.905666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.905845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.905973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.906275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.906561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.906735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 
00:24:08.889 [2024-04-24 22:15:50.906883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.907065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.907092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.889 qpair failed and we were unable to recover it. 00:24:08.889 [2024-04-24 22:15:50.907224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.907356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.889 [2024-04-24 22:15:50.907382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.907524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.907647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.907673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.907833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.908181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.908546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.908701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.908888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.909236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.909550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.909760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.909933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.910239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.910567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.910783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.910928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.911311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.911634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.911792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.911950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.912249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.912593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.912787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.912941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.913233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.913546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.913729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.913887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.914208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.914502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.914654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 
00:24:08.890 [2024-04-24 22:15:50.915409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.915583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.915612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.890 qpair failed and we were unable to recover it. 00:24:08.890 [2024-04-24 22:15:50.916325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.916497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.890 [2024-04-24 22:15:50.916527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.917216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.917403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.917432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.917574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.917725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.917752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.917923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.918214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.918535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.918690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.918852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.919243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.919560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.919754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.919888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.920224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.920527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.920717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.920889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.921243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.921589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.921831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.921990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.922337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.922644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.922803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.923015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.923379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.923705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.923905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.924062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.924389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.891 [2024-04-24 22:15:50.924736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.924892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.925052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.925419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 00:24:08.891 [2024-04-24 22:15:50.925804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.891 [2024-04-24 22:15:50.925981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.891 qpair failed and we were unable to recover it. 
00:24:08.894 [2024-04-24 22:15:50.954237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.954407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.954445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.894 qpair failed and we were unable to recover it. 00:24:08.894 [2024-04-24 22:15:50.954604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.954742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.954769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.894 qpair failed and we were unable to recover it. 00:24:08.894 [2024-04-24 22:15:50.955036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.955170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.894 [2024-04-24 22:15:50.955196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.894 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.955417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.955592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.955618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.955806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.956173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.956512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.956677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.956861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.957217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.957573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.957829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.958002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.958326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.958661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.958856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.959054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.959252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.959279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.959435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.959575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.959601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.959809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.959991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.960018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.960214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.960356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.960382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.960566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.960812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.960844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.960975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.961378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.961750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.961974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.962116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.962300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.962327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.962510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.962676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.962702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.962863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.962989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.963016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.963205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.963363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.963388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.963558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.963744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.963770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.963946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.964321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.964628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.964808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.965074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.965268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.965294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.965481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.965616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.965654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.965893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.966066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.966093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 
00:24:08.895 [2024-04-24 22:15:50.966362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.966540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.966567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.966791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.966974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.895 [2024-04-24 22:15:50.967001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.895 qpair failed and we were unable to recover it. 00:24:08.895 [2024-04-24 22:15:50.967217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.967435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.967463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.967590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.967799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.967825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.968043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.968225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.968252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.968420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.968573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.968599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.968788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.968993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.969020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.969215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.969381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.969416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.969600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.969777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.969805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.970013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.970138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.970165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.970339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.970575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.970604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.970860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.971281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.971657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.971845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.972016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.972233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.972260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.972643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.972670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.972838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.973325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.973661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.973857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.974055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.974226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.974253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.974440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.974618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.974646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.974853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.975196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.975509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.975700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.896 [2024-04-24 22:15:50.975898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.976252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.976664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.976872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 00:24:08.896 [2024-04-24 22:15:50.977081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.977259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.896 [2024-04-24 22:15:50.977286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.896 qpair failed and we were unable to recover it. 
00:24:08.899 [2024-04-24 22:15:51.011230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.011445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.011473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.011686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.011843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.011869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.012068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.012406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 
00:24:08.899 [2024-04-24 22:15:51.012802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.012990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.013173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.013358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.013386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.013601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.013757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.013784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.013980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 
00:24:08.899 [2024-04-24 22:15:51.014372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.014782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.014980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.015170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.015357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.015383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.015535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.015723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.015750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 
00:24:08.899 [2024-04-24 22:15:51.015918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.016310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.016649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.016887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.017078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.017259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.017286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 
00:24:08.899 [2024-04-24 22:15:51.017444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.017633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.017660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.899 [2024-04-24 22:15:51.017867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.018032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.899 [2024-04-24 22:15:51.018058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.899 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.018252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.018447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.018475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.018665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.018874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.018901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.019082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.019247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.019275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.019478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.019650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.019677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.019850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.020255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.020666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.020834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.021044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.021408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.021770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.021940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.022112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.022286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.022313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.022555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.022714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.022741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.022937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.023305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.023727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.023957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.024159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.024332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.024359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.024537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.024673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.024705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.024865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.025261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.025621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.025829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.026011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.026366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.026807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.026965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.027153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.027334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.027360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.027559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.027763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.027790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.028008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.028175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.028202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.028404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.028622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.028649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.028834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.029257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.029660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.029914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 
00:24:08.900 [2024-04-24 22:15:51.030119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.030333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.030360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.030555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.030730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.030757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.030957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.031119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.031147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.900 qpair failed and we were unable to recover it. 00:24:08.900 [2024-04-24 22:15:51.031312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.900 [2024-04-24 22:15:51.031476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.031504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.031719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.031894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.031920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.032068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.032241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.032268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.032479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.032633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.032659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.032851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.033178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.033578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.033771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.033987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.034409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.034780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.034938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.035112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.035299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.035326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.035496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.035706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.035734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.035950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.036334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.036718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.036942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.037138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.037345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.037372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.037539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.037731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.037758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.037969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.038315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.038714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.038953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.039131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.039333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.039361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.039567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.039761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.039788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.039959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.040159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.040186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.040362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.040564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.040592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.040804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.040994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.041021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.041221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.041423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.041451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.041594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.041757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.041783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.041958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.042370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.042778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.042978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.043151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.043319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.043346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.043556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.043778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.043805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.043967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 
00:24:08.901 [2024-04-24 22:15:51.044377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.044775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.044936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.901 [2024-04-24 22:15:51.045150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.045352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.901 [2024-04-24 22:15:51.045378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.901 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.045554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.045765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.045793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.046014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.046212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.046240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.046440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.046611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.046638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.046836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.046975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.047003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.047210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.047409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.047437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.047651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.047847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.047874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.048010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.048209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.048235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.048435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.048606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.048633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.048840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.048999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.049026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.049223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.049436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.049464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.049704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.049867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.049894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.050109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.050293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.050319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.050494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.050665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.050692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.050903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.051068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.051095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.051378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.051555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.051582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.051796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.051997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.052024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.052232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.052401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.052429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.052600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.052798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.052825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.053039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.053185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.053212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.053427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.053645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.053672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.053892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.054332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.054739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.054936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.055110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.055322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.055348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.055560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.055740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.055767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.055934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.056346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.056757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.056972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.057165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.057357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.057384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 
00:24:08.902 [2024-04-24 22:15:51.057561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.057742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.057769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.902 [2024-04-24 22:15:51.057980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.058142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.902 [2024-04-24 22:15:51.058169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.902 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.058376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.058516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.058543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.058696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.058897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.058923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 
00:24:08.903 [2024-04-24 22:15:51.059235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.059498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.059536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.059704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.059902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.059939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.060108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.060353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.060380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.060587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.060736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.060767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 
00:24:08.903 [2024-04-24 22:15:51.061014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.061189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.061216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.061341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.061529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.061557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.061791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.061983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.062009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.062250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.062484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.062511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 
00:24:08.903 [2024-04-24 22:15:51.062696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.062887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.062915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.063153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.063350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.063377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.063586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.063864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.063891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.064085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.064258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.064284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 
00:24:08.903 [2024-04-24 22:15:51.064546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.064711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.064738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.064944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.065335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 00:24:08.903 [2024-04-24 22:15:51.065714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.903 [2024-04-24 22:15:51.065908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.903 qpair failed and we were unable to recover it. 
00:24:08.903 [2024-04-24 22:15:51.066111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.066296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.066322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.066490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.066617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.066643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.066845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.067314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.067732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.067937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.068148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.068352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.068379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.068561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.068744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.068771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.068944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.069362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.069733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.069959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.070171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.070385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.070419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.070630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.070800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.070827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.070980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.071345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.071746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.071923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.072096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.072303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.072329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.072509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.072664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.072690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.072873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.073139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.073166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.073366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.073576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.073603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.073774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.073987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.903 [2024-04-24 22:15:51.074014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.903 qpair failed and we were unable to recover it.
00:24:08.903 [2024-04-24 22:15:51.074202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.074350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.074377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.074561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.074760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.074786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.074994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.075278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.075305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.075477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.075637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.075664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.075875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.076310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.076724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.076958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.077096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.077302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.077328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.077484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.077690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.077717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.077915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.078325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.078680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.078909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.079109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.079291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.079318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.079523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.079682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.079708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.079922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.080101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.080128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.080336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.080538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.080566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.080805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.080979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.081005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.081211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.081404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.081432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.081628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.081804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.081831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.082030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.082244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.082275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.082478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.082668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.082695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.082897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.083278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.083714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.083941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.084164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.084335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.084362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.084542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.084726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.084752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.084916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.085316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.085708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.085950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.086088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.086286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.086318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.086484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.086618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.086645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.086853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.087236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.087572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.087812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.087977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.088368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.088757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.088981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.089153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.089321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.089347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.904 qpair failed and we were unable to recover it.
00:24:08.904 [2024-04-24 22:15:51.089502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.904 [2024-04-24 22:15:51.089676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.089706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.089924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.090284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.090675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.090887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.091050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.091389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.091770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.091969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.092111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.092321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.092348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.092539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.092738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.092765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.092952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.093353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.093776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.093937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.094115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.094281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.094307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.094470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.094680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.905 [2024-04-24 22:15:51.094707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:08.905 qpair failed and we were unable to recover it.
00:24:08.905 [2024-04-24 22:15:51.094919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.095293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.095729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.095901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.096033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.096401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.096785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.096967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.097122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.097309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.097335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.097526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.097687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.097714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.097880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.098270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.098629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.098783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.098963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.099331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.099710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.099884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.100025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.100382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.100735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.100952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.101084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.101260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.101288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.101455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.101652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.101679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.101878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.102293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.102654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.102877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.103051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.103226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.103252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 00:24:08.905 [2024-04-24 22:15:51.103386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.103574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.103601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.905 qpair failed and we were unable to recover it. 
00:24:08.905 [2024-04-24 22:15:51.103808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.905 [2024-04-24 22:15:51.103966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.103993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.104201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.104368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.104414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.104548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.104721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.104748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.104924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.105327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.105727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.105884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.106050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.106446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.106805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.106997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.107175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.107336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.107362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.107551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.107711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.107738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.107940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.108311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.108753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.108966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.109145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.109308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.109335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.109481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.109680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.109708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.109869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.110235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.110612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.110792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.110976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.111345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.111738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.111923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.112063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.112229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.112256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.112435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.112620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.112647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.112870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.113211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.113592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.113780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:08.906 [2024-04-24 22:15:51.113906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.114100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.114127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 
00:24:08.906 [2024-04-24 22:15:51.114306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.114472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.906 [2024-04-24 22:15:51.114500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:08.906 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.114696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.114894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.114921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.115120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.115289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.115316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.115526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.115691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.115718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 
00:24:09.181 [2024-04-24 22:15:51.115918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.116251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.116717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.116951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.117124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.117326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.117353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 
00:24:09.181 [2024-04-24 22:15:51.117541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.117741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.117767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.117949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.118365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.118727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.118952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 
00:24:09.181 [2024-04-24 22:15:51.119120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.119290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.119317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.119518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.119708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.119735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.119893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.120105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.120132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 00:24:09.181 [2024-04-24 22:15:51.120297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.120459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.181 [2024-04-24 22:15:51.120487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.181 qpair failed and we were unable to recover it. 
00:24:09.182 [2024-04-24 22:15:51.151989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.152162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.152189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.152421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.152601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.152628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.152828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.153232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 
00:24:09.182 [2024-04-24 22:15:51.153649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.153874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.154042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.154212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.154239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.154433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.154604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.182 [2024-04-24 22:15:51.154631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.182 qpair failed and we were unable to recover it. 00:24:09.182 [2024-04-24 22:15:51.154838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.154971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.154998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.155116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.155332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.155359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.155533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.155707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.155735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.155929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.156342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.156714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.156904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.157089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.157292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.157319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.157525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.157731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.157759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.157958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.158158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.158186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.158407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.158590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.158617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.158839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.159261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.159603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.159796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.159993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.160392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.160768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.160988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.161206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.161328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.161355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.161492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.161677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.161704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.161871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.162275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.162679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.162903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.163078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.163231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.163257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.163456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.163645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.163673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.163872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.164266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.164684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.164889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.165062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.165244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.165271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.165473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.165629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.165656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.165834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.166242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.166705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.166895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.167050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.167229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.167255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.167415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.167627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.167654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.167857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.168302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.168781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.168993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.169216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.169408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.169444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.169641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.169852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.169878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.170056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.170223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.170250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.170422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.170618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.170644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.170850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.171260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.171548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.171725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.171872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.172328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 
00:24:09.183 [2024-04-24 22:15:51.172731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.172911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.173092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.173263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.173290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.173503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.173697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.183 [2024-04-24 22:15:51.173724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.183 qpair failed and we were unable to recover it. 00:24:09.183 [2024-04-24 22:15:51.173896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.184 [2024-04-24 22:15:51.174095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.184 [2024-04-24 22:15:51.174122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.184 qpair failed and we were unable to recover it. 
00:24:09.184 [2024-04-24 22:15:51.174374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.174535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.174563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.174730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.174949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.174975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.175147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.175365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.175392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.175570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.175743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.175769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.175982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.176147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.176174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.176356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.176546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.176574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.176774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.176983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.177009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.177171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.177389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.177425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.177628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.177803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.177829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.178013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.178171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.178198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.178405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.178607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.178634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.178844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.179236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.179635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.179858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.180110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.180309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.180336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.180506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.180680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.180707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.180909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.181289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.181701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.181879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.182042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.182411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.182710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.182962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.183100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.183318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.183345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.183543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.183717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.183745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.183942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.184290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.184723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.184916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.185110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.185380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.185415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.185618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.185779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.185806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.186003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.186184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.186211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.186387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.186586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.186612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.186838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.187267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.187661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.187890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.188109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.188273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.188300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.188470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.188636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.188663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.188823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.188993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.189020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.189182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.189377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.189411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.189577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.189739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.189766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.189905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.190335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.190783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.190985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.191157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.191351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.191377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.191548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.191720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.191746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.191928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.192313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.192664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.192858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.193110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.193254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.193281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.193486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.193673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.193700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.193867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.194275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.194641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.194835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.194972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.195164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.195192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.184 [2024-04-24 22:15:51.195401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.195548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.184 [2024-04-24 22:15:51.195575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.184 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.195746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.195878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.195904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.196116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.196277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.196304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.196503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.196706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.196732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.196932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.197343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.197738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.197932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.198099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.198282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.198309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.198479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.198706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.198738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.198902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.199238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.199643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.199800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.200001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.200195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.200222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.200425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.200632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.200660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.200860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.201255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.201595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.201791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.201962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.202337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.202748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.202940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.203206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.203377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.203411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.203636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.203810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.203837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.204036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.204251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.204278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.204481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.204706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.204733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.204931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.205330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.205714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.205861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.206041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.206236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.206263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.206432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.206627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.206661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.206791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.206992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.207020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.207200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.207371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.207403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.207626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.207839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.207867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.208071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.208262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.208289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.208468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.208661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.208687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.208826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.209011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.185 [2024-04-24 22:15:51.209038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.185 qpair failed and we were unable to recover it.
00:24:09.185 [2024-04-24 22:15:51.209239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.209405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.209433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.209604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.209799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.209826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.210001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.210203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.210230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.210374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.210579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.210612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 
00:24:09.185 [2024-04-24 22:15:51.210843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.210988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.211015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.211214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.211372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.211406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.211586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.211756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.211782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.211953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.212119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.212146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 
00:24:09.185 [2024-04-24 22:15:51.212355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.212576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.212603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.212799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.213235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.213622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.213824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 
00:24:09.185 [2024-04-24 22:15:51.214023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.214227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.214254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.214442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.214623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.214649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.214871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.215083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.215110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 00:24:09.185 [2024-04-24 22:15:51.215344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.215518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.185 [2024-04-24 22:15:51.215546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.185 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.215710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.215892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.215919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.216118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.216441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.216788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.216990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.217146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.217415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.217444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.217598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.217783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.217810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.218018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.218182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.218208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.218380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.218571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.218598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.218802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.218973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.219000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.219167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.219363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.219390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.219604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.219801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.219828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.219999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.220206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.220233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.220439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.220623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.220650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.220857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.221214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.221650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.221841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.222039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.222174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.222201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.222402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.222599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.222626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.222897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.223310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.223669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.223869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.224082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.224280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.224307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.224506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.224675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.224703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.224865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.225226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.225632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.225829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.226025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.226380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.226751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.226948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.227145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.227358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.227385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.227593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.227721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.227748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.227942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.228326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.228646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.228836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.229022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.229226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.229253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.229452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.229701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.229728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.229933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.230298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.230671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.230892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.231101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.231261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.231289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.231464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.231643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.231670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.231842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.232248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.232630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.232796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.232997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.233362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.233698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.233911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.234101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.234281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.234307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.234483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.234692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.234719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.234918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.235295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.235676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.235902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 
00:24:09.186 [2024-04-24 22:15:51.236113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.236266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.236293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.236461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.236670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.236697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.236869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.237078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.186 [2024-04-24 22:15:51.237105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.186 qpair failed and we were unable to recover it. 00:24:09.186 [2024-04-24 22:15:51.237356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.237571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.237599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.237806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.238215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.238719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.238886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.239087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.239220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.239246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.239420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.239624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.239651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.239853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.239985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.240011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.240185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.240354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.240381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.240630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.240794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.240821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.241030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.241225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.241251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.241513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.241686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.241713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.241950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.242317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.242738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.242905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.243102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.243297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.243324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.243524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.243724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.243751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.243958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.244303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.244653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.244886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.245101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.245304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.245331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.245526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.245733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.245759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.245954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.246305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.246707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.246913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.247112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.247287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.247313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.247494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.247630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.247656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.247863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.248271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.248701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.248894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.249086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.249235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.249262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.249440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.249631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.249658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.249869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.250241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.250595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.250842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.251041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.251222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.251249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.251377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.251592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.251620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.251808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.251986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.252013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.252209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.252372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.252418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.252597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.252795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.252822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.252992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.253191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.253218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.253430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.253639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.253666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.253910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.254115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.254142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.254354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.254566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.254594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.254801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.255272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.255664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.255867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.256069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.256233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.256260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.256465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.256660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.256687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.256896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.257339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.257741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.257963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.258171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.258369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.258402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 00:24:09.187 [2024-04-24 22:15:51.258614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.258828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.258856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.187 qpair failed and we were unable to recover it. 
00:24:09.187 [2024-04-24 22:15:51.259026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.187 [2024-04-24 22:15:51.259229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.259257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.259461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.259638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.259665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.259831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.260237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 
00:24:09.188 [2024-04-24 22:15:51.260601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.260801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.260973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.261149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.261176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.261371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.261545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.261573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.261794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.261984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.262011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 
00:24:09.188 [2024-04-24 22:15:51.262232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.262493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.262521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.262721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.262917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.262944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.263116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.263268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.263299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 00:24:09.188 [2024-04-24 22:15:51.263506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.263681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.188 [2024-04-24 22:15:51.263708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.188 qpair failed and we were unable to recover it. 
00:24:09.188 [2024-04-24 22:15:51.263877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.264238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.264690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.264885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.265053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.265215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.265242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.265439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.265648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.265674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.265870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.266276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.266676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.266860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.267069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.267230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.267264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.267477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.267648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.267674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.267898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.268269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.268733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.268977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.269178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.269373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.269405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.269578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.269780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.269816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.270022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.270386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.270707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.270881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.271052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.271223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.271256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.271425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.271607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.271633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.271786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.271980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.272007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.272233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.272436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.272464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.272658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.272856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.272882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.273090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.273257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.273283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.273495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.273727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.273753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.273968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.274336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.274635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.274865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.275034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.275229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.275255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.275433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.275629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.275657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.275869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.276306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.276736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.276960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.277156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.277315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.277341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.277501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.277762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.277790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.277918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.278075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.278101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.278264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.278477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.278504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.188 qpair failed and we were unable to recover it.
00:24:09.188 [2024-04-24 22:15:51.278673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.188 [2024-04-24 22:15:51.278844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.278870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.279065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.279273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.279301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.279497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.279695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.279722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.279928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.280312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.280678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.280901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.281102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.281249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.281277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.281465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.281698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.281725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.281870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.282262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.282660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.282884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.283093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.283277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.283303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.283511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.283715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.283742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.283943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.284327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.284728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.284958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.285164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.285361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.285387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.285591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.285808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.285835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.286023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.286216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.286243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.286451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.286621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.286647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.286857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.287288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.287675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.287906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.288084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.288456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.288784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.288981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.289174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.289342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.289367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.289569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.289743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.289770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.289974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.290371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.290773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.290985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.291146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.291297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.291324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.291487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.291710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.291736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.291947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.292116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.292143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.292314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.292512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.189 [2024-04-24 22:15:51.292539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.189 qpair failed and we were unable to recover it.
00:24:09.189 [2024-04-24 22:15:51.292731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.292897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.292924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.293104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.293437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.293793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.293981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 
00:24:09.189 [2024-04-24 22:15:51.294147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.294308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.294335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.294493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.294656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.294683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.294893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.295318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 
00:24:09.189 [2024-04-24 22:15:51.295684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.295877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.296059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.296268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.296295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.296490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.296671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.296698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.296835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 
00:24:09.189 [2024-04-24 22:15:51.297230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.297584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.297825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.298028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.298226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.298253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.298431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.298632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.298659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 
00:24:09.189 [2024-04-24 22:15:51.298831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.299310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.299669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.299869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 00:24:09.189 [2024-04-24 22:15:51.300041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.300237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.300264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.189 qpair failed and we were unable to recover it. 
00:24:09.189 [2024-04-24 22:15:51.300418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.189 [2024-04-24 22:15:51.300614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.300641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.300844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.301248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.301628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.301819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.301990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.302175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.302201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.302377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.302587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.302615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.302872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.303318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.303727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.303922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.304083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.304295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.304322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.304512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.304716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.304743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.304922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.305321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.305722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.305936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.306140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.306344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.306370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.306562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.306759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.306786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.306957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.307324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.307726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.307919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.308135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.308292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.308318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.308492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.308698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.308725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.308913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.309335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.309765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.309993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.310176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.310381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.310414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.310604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.310835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.310862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.311039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.311199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.311225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.311422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.311632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.311660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.311863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.312268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.312744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.312975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.313137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.313323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.313350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.313522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.313703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.313731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.313945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.314155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.314182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.314416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.314618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.314645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.314857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.315340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.315716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.315921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.316055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.316262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.316289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.316491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.316700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.316727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.316889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.317283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.317620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.317865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.317995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.190 [2024-04-24 22:15:51.318306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.318665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.318889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.319059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.319264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.319290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 00:24:09.190 [2024-04-24 22:15:51.319447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.319629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.190 [2024-04-24 22:15:51.319656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.190 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.352841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.353212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.353590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.353799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.353936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.354301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.354624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.354848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.355102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.355269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.355296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.355447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.355650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.355677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.355880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.356248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.356613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.356841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.357015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.357362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.357790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.357986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.358155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.358286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.358312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.358490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.358652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.358679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.358861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.359164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.359503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.359700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.359872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.360291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.360648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.360842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.361025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.361408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.361723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.361920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.362114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.362276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.362303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.362484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.362645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.362672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.362838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.362980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.363007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 
00:24:09.192 [2024-04-24 22:15:51.363214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.363428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.363456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.192 qpair failed and we were unable to recover it. 00:24:09.192 [2024-04-24 22:15:51.363637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.192 [2024-04-24 22:15:51.363763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.363791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.363951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.364401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.364788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.364978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.365192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.365455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.365483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.365690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.365854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.365881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.366056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.366224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.366250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.366428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.366613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.366640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.366802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.366998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.367025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.367197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.367407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.367435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.367603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.367801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.367827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.367965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.368288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.368641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.368805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.368964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.369182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.369208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.369422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.369653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.369681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.369869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.370292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.370699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.370869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.371041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.371215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.371241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.371414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.371639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.371667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.371843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.372252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.372673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.372900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.373072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.373264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.373291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.373470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.373643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.373670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.373839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
00:24:09.193 [2024-04-24 22:15:51.374261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.374668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.374863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.375057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.375207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.375233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 00:24:09.193 [2024-04-24 22:15:51.375400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.375562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.193 [2024-04-24 22:15:51.375588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.193 qpair failed and we were unable to recover it. 
[... the same four-line failure group repeats approximately 80 more times with identical content; log timestamps advance from 22:15:51.375729 to 22:15:51.408431 ...]
00:24:09.195 [2024-04-24 22:15:51.408201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.195 [2024-04-24 22:15:51.408403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.195 [2024-04-24 22:15:51.408431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.195 qpair failed and we were unable to recover it.
00:24:09.195 [2024-04-24 22:15:51.408615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.408760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.408787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.408993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.409154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.409181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.409349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.409557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.409585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.409795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.409988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.410015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.410155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.410284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.410311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.410519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.410673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.410701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.410855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.411245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.411669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.411854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.412009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.412390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.412769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.412980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.413145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.413313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.413340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.413526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.413706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.413734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.413930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.414299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.414653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.414854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.415028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.415210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.415237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.415413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.415613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.415640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.415844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.415978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.416005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.416142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.416348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.416375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.416604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.416763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.416790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.416965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.417311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.417710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.417922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.418130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.418337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.418364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.418588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.418792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.418819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.418982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 
00:24:09.195 [2024-04-24 22:15:51.419352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.419665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.419854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.195 qpair failed and we were unable to recover it. 00:24:09.195 [2024-04-24 22:15:51.420054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.195 [2024-04-24 22:15:51.420195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.420222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.420389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.420611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.420638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 
00:24:09.468 [2024-04-24 22:15:51.420766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.420927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.420954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.421167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.421291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.421318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.421463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.421623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.421653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.421835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 
00:24:09.468 [2024-04-24 22:15:51.422214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.422623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.422856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.423022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.423428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 
00:24:09.468 [2024-04-24 22:15:51.423792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.423990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.424143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.424277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.424303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.468 qpair failed and we were unable to recover it. 00:24:09.468 [2024-04-24 22:15:51.424510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.424713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.468 [2024-04-24 22:15:51.424739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.424870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.425262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.425727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.425913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.426119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.426291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.426317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.426500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.426646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.426672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.426844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.427056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.427082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.427288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.427493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.427521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.427796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.427996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.428022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.428196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.428390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.428424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.428597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.428798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.428826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.428968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.429367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.429719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.429926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.430089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.430288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.430315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.430539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.430723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.430750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.430911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.431291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.431693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.431906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.432085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.432294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.432321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.432493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.432685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.432712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.432846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.433225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.433611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.433807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.433969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.434341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 
00:24:09.469 [2024-04-24 22:15:51.434689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.434909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.435073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.435234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.435262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.435461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.435620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.469 [2024-04-24 22:15:51.435647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.469 qpair failed and we were unable to recover it. 00:24:09.469 [2024-04-24 22:15:51.435824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.435997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.436024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.436187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.436319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.436346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.436541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.436711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.436739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.436903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.437252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.437607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.437822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.437954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.438302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.438677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.438835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.439026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.439405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.439802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.439992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.440158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.440320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.440348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.440516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.440650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.440677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.440837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.440995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.441022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.441210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.441365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.441392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.441575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.441730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.441757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.441876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.442275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.442593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.442785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.442973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.443347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.443728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.443926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.444097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.444441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.470 [2024-04-24 22:15:51.444822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.444975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.445167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.445350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.445377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.445547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.445730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.445757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 00:24:09.470 [2024-04-24 22:15:51.445912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.446035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.470 [2024-04-24 22:15:51.446062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.470 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.446250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.446452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.446480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.446675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.446829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.446856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.447033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.447197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.447224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.447425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.447586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.447618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.447817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.447984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.448011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.448207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.448413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.448447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.448644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.448853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.448880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.449044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.449213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.449240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.449457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.449606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.449633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.449846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.450171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.450574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.450783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.450993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.451181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.451208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.451389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.451563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.451595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.451806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.452232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.452658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.452901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.453115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.453317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.453344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.453525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.453727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.453753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.453959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.454172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.454199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.454480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.454654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.454681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.454858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.455224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.455693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.455905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.456100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.456314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.456341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.456498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.456665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.456692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.456851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.457052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.457079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 00:24:09.471 [2024-04-24 22:15:51.457331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.457495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.457523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.471 qpair failed and we were unable to recover it. 
00:24:09.471 [2024-04-24 22:15:51.457695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.471 [2024-04-24 22:15:51.457900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.457927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.458126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.458301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.458328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.458475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.458613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.458640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.458826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 
00:24:09.472 [2024-04-24 22:15:51.459262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.459657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.459854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.460056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.460228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.460255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 00:24:09.472 [2024-04-24 22:15:51.460417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.460573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.472 [2024-04-24 22:15:51.460601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.472 qpair failed and we were unable to recover it. 
00:24:09.472 [2024-04-24 22:15:51.460791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.460984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.461011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.461249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.461409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.461438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.461603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.461791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.461818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.461976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.462366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.462752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.462978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.463120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.463319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.463345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.463560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.463716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.463743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.463944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.464375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.464765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.464958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.465083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.465225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.465252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.465465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.465667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.465695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.465876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.466234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.466708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.466931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.467075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.467200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.467226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.467441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.467602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.467629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.467827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.468264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.468691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.468922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.472 qpair failed and we were unable to recover it.
00:24:09.472 [2024-04-24 22:15:51.469093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.472 [2024-04-24 22:15:51.469272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.469299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.469462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.469637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.469664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.469843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.470251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.470634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.470872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.471057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.471236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.471263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.471444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.471631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.471658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.471833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.472165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.472583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.472765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.472960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.473306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.473694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.473882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.474107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.474311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.474336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.474546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.474724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.474751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.474926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.475308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.475704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.475893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.476056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.476241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.476268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.476429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.476590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.476617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.476818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.477000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.477026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.477205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.477366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.477405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.473 qpair failed and we were unable to recover it.
00:24:09.473 [2024-04-24 22:15:51.477546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.473 [2024-04-24 22:15:51.477792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.477819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.477955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.478410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.478791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.478981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.479170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.479318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.479344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.479568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.479835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.479863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.480060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.480261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.480288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.480488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.480653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.480679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.480860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.481254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.481683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.481884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.482079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.482299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.482326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.482487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.482648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.482675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.482850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.483244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.483554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.483767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.483959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.484370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.484742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.484929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.485140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.485392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.485439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.485649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.485821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.485848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.486038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.486202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.486228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.486405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.486617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.486644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.486817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.487249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.487616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.487840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.488024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.488198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.488225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.488400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.488572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.488598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.488807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.489032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.474 [2024-04-24 22:15:51.489059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.474 qpair failed and we were unable to recover it.
00:24:09.474 [2024-04-24 22:15:51.489218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.474 [2024-04-24 22:15:51.489381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.489427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.489619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.489783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.489809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.489984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.490167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.490194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.490401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.490604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.490630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.490816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.491223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.491665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.491882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.492095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.492306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.492333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.492520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.492681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.492707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.492864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.493262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.493782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.493972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.494100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.494299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.494326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.494498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.494653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.494681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.494890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.495260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.495709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.495902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.496061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.496250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.496276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.496440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.496652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.496679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.496866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.497287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.497714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.497895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.498045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.498380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.498783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.498999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.499202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.499409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.499436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.499614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.499788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.499816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.500027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.500206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.500232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 
00:24:09.475 [2024-04-24 22:15:51.500365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.500569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.475 [2024-04-24 22:15:51.500597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.475 qpair failed and we were unable to recover it. 00:24:09.475 [2024-04-24 22:15:51.500761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.500966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.500993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.501198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.501383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.501425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.501581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.501708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.501734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.501867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.502278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.502705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.502885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.503046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.503199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.503226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.503390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.503608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.503636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.503807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.504256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.504658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.504880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.505025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.505426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.505738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.505891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.506084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.506269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.506295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.506522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.506712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.506738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.506945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.507311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.507677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.507885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.508070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.508268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.508294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.508511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.508669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.508695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.508915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.509134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.509161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.509369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.509553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.509581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.509797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.509989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.510016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.510220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.510389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.510425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.510624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.510798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.510824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.511021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.511222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.511249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 
00:24:09.476 [2024-04-24 22:15:51.511506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.511681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.511713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.476 [2024-04-24 22:15:51.511860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.512020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.476 [2024-04-24 22:15:51.512047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.476 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.512252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.512401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.512429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.512638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.512828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.512854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.513054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.513223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.513250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.513399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.513596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.513623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.513830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.514232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.514607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.514821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.514990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.515174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.515200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.515374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.515559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.515591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.515874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.516273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.516703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.516900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.517110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.517272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.517299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.517524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.517678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.517705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.517836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.518234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.518602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.518829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.519086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.519215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.519242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.519416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.519621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.519653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.519797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.520243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.520686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.520922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 
00:24:09.477 [2024-04-24 22:15:51.521049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.521269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.521296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.521448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.521665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.477 [2024-04-24 22:15:51.521692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.477 qpair failed and we were unable to recover it. 00:24:09.477 [2024-04-24 22:15:51.521888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.522250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.522646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.522870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.523111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.523292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.523319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.523494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.523672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.523700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.523919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.524293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.524690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.524877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.525059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.525240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.525267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.525419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.525608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.525646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.525856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.526283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.526758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.526994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.527177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.527377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.527416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.527638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.527809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.527837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.528048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.528391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.528733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.528965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.529139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.529347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.529374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.529580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.529721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.529748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.529914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.530343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.530714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.530955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.531145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.531338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.531365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.531549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.531758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.531785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.531950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 
00:24:09.478 [2024-04-24 22:15:51.532338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.532685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.532876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.478 [2024-04-24 22:15:51.533047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.533183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.478 [2024-04-24 22:15:51.533210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.478 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.533376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.533581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.533608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.533810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.533977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.534004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.534204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.534382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.534419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.534605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.534794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.534820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.534971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.535147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.535174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.535386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.535554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.535581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.535811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.535975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.536003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.536221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.536438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.536466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.536645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.536856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.536883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.537056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.537248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.537275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.537421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.537618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.537646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.537821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.537981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.538008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.538176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.538373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.538406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.538608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.538773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.538800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.538967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.539209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.539236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.539413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.539628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.539655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.539871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.540190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.540534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.540758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.540957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.541162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.541189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.541403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.541623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.541651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.541785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.541987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.542014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.542201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.542403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.542431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.542639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.542818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.542844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.543053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.543265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.543292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 
00:24:09.479 [2024-04-24 22:15:51.543513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.543710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.543737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.543910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.544103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.544129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.479 qpair failed and we were unable to recover it. 00:24:09.479 [2024-04-24 22:15:51.544263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.479 [2024-04-24 22:15:51.544456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.544485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 00:24:09.480 [2024-04-24 22:15:51.544664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.544837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.544865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 
00:24:09.480 [2024-04-24 22:15:51.545047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.545253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.545279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 00:24:09.480 [2024-04-24 22:15:51.545443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.545625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.545652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 00:24:09.480 [2024-04-24 22:15:51.545861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.546042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.546069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 00:24:09.480 [2024-04-24 22:15:51.546254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.546449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.480 [2024-04-24 22:15:51.546476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.480 qpair failed and we were unable to recover it. 
00:24:09.480 [2024-04-24 22:15:51.546672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.546878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.546905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.547091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.547290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.547316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.547515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.547677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.547703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.547974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.548370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.548779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.548974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.549180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.549357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.549384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.549583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.549790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.549816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.550000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.550203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.550230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.550430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.550608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.550635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.550801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.550998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.551025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.551208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.551367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.551402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.551617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.551806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.551834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.552038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.552436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.552774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.552961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.553102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.553270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.553296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.553553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.553732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.553758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.553891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.554332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.554754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.554923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.555129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.555328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.555354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.555494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.555693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.480 [2024-04-24 22:15:51.555720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.480 qpair failed and we were unable to recover it.
00:24:09.480 [2024-04-24 22:15:51.555904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.556320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.556725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.556931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.557104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.557275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.557301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.557509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.557721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.557747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.557937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.558324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.558742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.558976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.559163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.559365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.559400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.559610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.559783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.559809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.559981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.560187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.560214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.560422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.560584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.560611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.560781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.560997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.561024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.561203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.561414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.561442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.561644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.561857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.561885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.562086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.562270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.562297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.562509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.562644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.562670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.562840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.563234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.563670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.563895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.564106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.564296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.564324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.564525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.564696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.564723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.564906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.565309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.565690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.565880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.566065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.566261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.566287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.566486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.566663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.566689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.481 qpair failed and we were unable to recover it.
00:24:09.481 [2024-04-24 22:15:51.566865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.481 [2024-04-24 22:15:51.567070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.567097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.567227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.567426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.567454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.567664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.567873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.567899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.568070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.568248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.568276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.568484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.568657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.568684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.568890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.569281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.569653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.569863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.570048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.570442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.570804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.570980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.571127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.571319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.571346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.571569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.571763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.571790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.571990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.572172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.572204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.572417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.572564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.572591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.572810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.572973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.573000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.573161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.573360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.573386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.573606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.573799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.573826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.574025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.574218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.574245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.574386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.574564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.574591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.574803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.575019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.575046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.575212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.575373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.482 [2024-04-24 22:15:51.575407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.482 qpair failed and we were unable to recover it.
00:24:09.482 [2024-04-24 22:15:51.575604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.482 [2024-04-24 22:15:51.575771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.482 [2024-04-24 22:15:51.575798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.482 qpair failed and we were unable to recover it. 00:24:09.482 [2024-04-24 22:15:51.575995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.482 [2024-04-24 22:15:51.576198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.482 [2024-04-24 22:15:51.576230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.482 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.576445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.576622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.576649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.576784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.576988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.577014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.577188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.577388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.577422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.577584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.577718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.577745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.577945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.578336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.578729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.578930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.579127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.579295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.579321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.579507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.579705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.579733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.579902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.580322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.580690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.580875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.581021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.581204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.581230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.581420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.581585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.581612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.581824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.581979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.582005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.582189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.582379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.582413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.582624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.582834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.582861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.583027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.583237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.583264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.583451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.583637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.583664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.583832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.584300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.584709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.584885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.585045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.585239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.585266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.585438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.585624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.585651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.585852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.586203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 
00:24:09.483 [2024-04-24 22:15:51.586614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.586807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.587003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.587177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.483 [2024-04-24 22:15:51.587204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.483 qpair failed and we were unable to recover it. 00:24:09.483 [2024-04-24 22:15:51.587400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.587608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.587635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.587845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.588213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.588588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.588805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.588992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.589197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.589224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.589422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.589603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.589634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.589833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.589998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.590025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.590203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.590385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.590421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.590594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.590766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.590793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.590975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.591391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.591731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.591980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.592206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.592356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.592383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.592611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.592818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.592845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.593000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.593205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.593232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.593410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.593611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.593638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.593807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.594206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.594614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.594808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.594971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.595363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.595743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.595907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.596075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.596441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.596782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.596996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.597211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.597421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.597449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 
00:24:09.484 [2024-04-24 22:15:51.597644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.597838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.597865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.598027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.598220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.598246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.484 qpair failed and we were unable to recover it. 00:24:09.484 [2024-04-24 22:15:51.598386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.598563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.484 [2024-04-24 22:15:51.598590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 00:24:09.485 [2024-04-24 22:15:51.598736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.598928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.598955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 
00:24:09.485 [2024-04-24 22:15:51.599134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.599276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.599303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 00:24:09.485 [2024-04-24 22:15:51.599443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.599646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.599674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 00:24:09.485 [2024-04-24 22:15:51.599869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.600071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.600098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 00:24:09.485 [2024-04-24 22:15:51.600302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.600485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.485 [2024-04-24 22:15:51.600514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.485 qpair failed and we were unable to recover it. 
00:24:09.485 [2024-04-24 22:15:51.600702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.600854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.600881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.601090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.601432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.601801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.601993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.602199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.602408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.602441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.602627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.602839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.602865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.603031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.603242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.603268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.603486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.603672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.603699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.603912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.604305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.604698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.604891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.605086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.605251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.605278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.605473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.605653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.605680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.605881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.606219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.606619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.606810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.606974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.607372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.607778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.607966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.608121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.608345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.485 [2024-04-24 22:15:51.608372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.485 qpair failed and we were unable to recover it.
00:24:09.485 [2024-04-24 22:15:51.608562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.608718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.608745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.608963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.609319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.609727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.609927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.610137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.610296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.610323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.610487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.610710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.610737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.610949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.611307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.611692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.611889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.612051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.612406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.612744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.612933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.613113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.613302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.613328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.613498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.613669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.613697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.613906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.614289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.614656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.614841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.615024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.615387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.615723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.615956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.616158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.616360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.616387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.616560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.616704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.616731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.616935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.617336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.617685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.617900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.618108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.618271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.618298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.618482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.618653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.618680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.618898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.619032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.619059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.486 qpair failed and we were unable to recover it.
00:24:09.486 [2024-04-24 22:15:51.619270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.486 [2024-04-24 22:15:51.619466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.619495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.619647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.619813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.619840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.620004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.620402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.620713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.620915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.621084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.621248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.621275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.621475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.621673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.621700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.621872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.622210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.622634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.622858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.623065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.623218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.623245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.623455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.623645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.623672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.623835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.624257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.624611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.624791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.624957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.625330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.625712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.625905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.626045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.626406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.626735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.626903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.627072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.627277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.627304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.627547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.627710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.627737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.487 qpair failed and we were unable to recover it.
00:24:09.487 [2024-04-24 22:15:51.627929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.628099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.487 [2024-04-24 22:15:51.628126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.628303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.628482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.628510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.628666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.628846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.628873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.629041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.629169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.629196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.629408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.629586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.629612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.629786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.629986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.630013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.630193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.630356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.630383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.630566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.630738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.630770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.630972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.631301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.631704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.631881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.632087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.632289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.632315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.632471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.632642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.632670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.632870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.633272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.633667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.633830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.633965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.634145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.488 [2024-04-24 22:15:51.634172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.488 qpair failed and we were unable to recover it.
00:24:09.488 [2024-04-24 22:15:51.634323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.634512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.634545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.634669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.634839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.634865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.635050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.635401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 
00:24:09.488 [2024-04-24 22:15:51.635719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.635925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.636060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.636223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.636250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.636425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.636602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.636630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.636795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.636997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.637024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 
00:24:09.488 [2024-04-24 22:15:51.637236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.637407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.637435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.637634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.637775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.637802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.637959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.638143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.638175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 00:24:09.488 [2024-04-24 22:15:51.638336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.638459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.488 [2024-04-24 22:15:51.638487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.488 qpair failed and we were unable to recover it. 
00:24:09.488 [2024-04-24 22:15:51.638702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.638892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.638918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.639074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.639247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.639274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.639445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.639611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.639638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.639806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.639984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.640010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.640179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.640363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.640390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.640606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.640766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.640793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.640921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.641326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.641693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.641893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.642066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.642236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.642263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.642471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.642643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.642670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.642839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.643227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.643615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.643862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.644011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.644407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.644744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.644932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.645082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.645250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.645277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.645465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.645661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.645688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.645881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.646173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.646582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.646758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.646926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.647358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.647666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.647899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.648065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.648238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.648264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.648473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.648641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.648668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 00:24:09.489 [2024-04-24 22:15:51.648865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.649070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.649096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.489 qpair failed and we were unable to recover it. 
00:24:09.489 [2024-04-24 22:15:51.649278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.489 [2024-04-24 22:15:51.649479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.649508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.649688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.649865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.649892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.650079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.650240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.650267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.650478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.650663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.650690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 
00:24:09.490 [2024-04-24 22:15:51.650813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.650974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.651001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.651171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.651351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.651378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.651562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.651694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.651722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.651864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 
00:24:09.490 [2024-04-24 22:15:51.652264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.652647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.652867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.653041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.653387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 
00:24:09.490 [2024-04-24 22:15:51.653758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.653949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.654148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.654283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.654314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.654496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.654679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.654706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 00:24:09.490 [2024-04-24 22:15:51.654899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.655027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.490 [2024-04-24 22:15:51.655054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.490 qpair failed and we were unable to recover it. 
00:24:09.490 [2024-04-24 22:15:51.655217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.490 [2024-04-24 22:15:51.655376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.490 [2024-04-24 22:15:51.655410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.490 qpair failed and we were unable to recover it.
00:24:09.490-00:24:09.493 [... the identical connect() failed (errno = 111) / qpair-failure sequence repeats continuously from 22:15:51.655618 through 22:15:51.689644, always for tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 ...]
00:24:09.493 [2024-04-24 22:15:51.689846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.690239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.690619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.690829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.691010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.691204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.691231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 
00:24:09.493 [2024-04-24 22:15:51.691435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.691631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.691658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.691881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.692099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.692126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.692335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.692514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.692542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.493 qpair failed and we were unable to recover it. 00:24:09.493 [2024-04-24 22:15:51.692714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.493 [2024-04-24 22:15:51.692929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.692961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.693203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.693410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.693439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.693621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.693787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.693814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.694006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.694203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.694230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.694434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.694596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.694622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.694821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.694993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.695019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.695180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.695368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.695402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.695580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.695708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.695736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.695872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.696264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.696626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.696830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.696965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.697182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.697210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.697401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.697607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.697634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.697831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.698246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.698643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.698839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.699034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.699432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.699781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.699980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.700187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.700434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.700463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.700637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.700808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.700840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.701017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.701193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.701220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.701426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.701623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.701651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.701828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.701990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.702018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.702216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.702389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.702426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.702615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.702817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.702844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.703030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.703248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.703275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.703428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.703643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.703670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.703845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.703993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.704020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 
00:24:09.494 [2024-04-24 22:15:51.704192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.704413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.704441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.704576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.704777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.494 [2024-04-24 22:15:51.704809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.494 qpair failed and we were unable to recover it. 00:24:09.494 [2024-04-24 22:15:51.704999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.705207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.705234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.705425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.705667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.705694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 
00:24:09.495 [2024-04-24 22:15:51.705964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.706415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.706778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.706983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.707180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.707360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.707387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 
00:24:09.495 [2024-04-24 22:15:51.707580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.707752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.707779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.707951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.708320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.708708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.708937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 
00:24:09.495 [2024-04-24 22:15:51.709141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.709302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.709329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.709520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.709678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.709705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.495 [2024-04-24 22:15:51.709857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.710011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.495 [2024-04-24 22:15:51.710038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.495 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.710223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.710357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.710384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 
00:24:09.768 [2024-04-24 22:15:51.710592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.710788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.710814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.711019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.711188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.711214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.711356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.711560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.711588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.711787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.711984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.712011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 
00:24:09.768 [2024-04-24 22:15:51.712243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.712426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.712454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.712640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.712851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.712877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.713065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.713235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.713261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 00:24:09.768 [2024-04-24 22:15:51.713462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.713649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.768 [2024-04-24 22:15:51.713676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.768 qpair failed and we were unable to recover it. 
00:24:09.771 [2024-04-24 22:15:51.747130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.747298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.747325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.771 qpair failed and we were unable to recover it. 00:24:09.771 [2024-04-24 22:15:51.747494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.747683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.747709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.771 qpair failed and we were unable to recover it. 00:24:09.771 [2024-04-24 22:15:51.747906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.748102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.748129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.771 qpair failed and we were unable to recover it. 00:24:09.771 [2024-04-24 22:15:51.748332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.748526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.771 [2024-04-24 22:15:51.748554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.748758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.748963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.748991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.749187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.749392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.749432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.749650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.749838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.749866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.750046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.750235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.750261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.750467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.750654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.750681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.750888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.751221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.751620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.751849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.752072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.752277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.752304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.752470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.752645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.752671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.752833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.753233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.753674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.753885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.754047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.754413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.754771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.754997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.755174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.755355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.755382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.755577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.755772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.755799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.755979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.756319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.756701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.756899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.757077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.757222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.757249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.757424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.757624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.757651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.757865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.772 [2024-04-24 22:15:51.758308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.758726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.758914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.759072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.759280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.759307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 00:24:09.772 [2024-04-24 22:15:51.759505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.759675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.772 [2024-04-24 22:15:51.759702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.772 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.759870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.760236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.760632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.760825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.760987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.761413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.761816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.761995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.762219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.762420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.762449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.762621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.762812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.762839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.763052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.763231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.763257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.763445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.763637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.763664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.763832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.764230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.764677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.764947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.765148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.765310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.765337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.765537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.765707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.765734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.765917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.766264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.766672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.766915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.767050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.767194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.767221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.767423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.767624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.767651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 
00:24:09.773 [2024-04-24 22:15:51.767831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.768053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.768080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.768268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.768441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.773 [2024-04-24 22:15:51.768469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.773 qpair failed and we were unable to recover it. 00:24:09.773 [2024-04-24 22:15:51.768649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.774 [2024-04-24 22:15:51.768813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.774 [2024-04-24 22:15:51.768845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.774 qpair failed and we were unable to recover it. 00:24:09.774 [2024-04-24 22:15:51.769021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.774 [2024-04-24 22:15:51.769223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.774 [2024-04-24 22:15:51.769250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.774 qpair failed and we were unable to recover it. 
00:24:09.774 [2024-04-24 22:15:51.769425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.769619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.769646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.769847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.769982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.770009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.770177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.770372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.770404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.770662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.770805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.770832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.771027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.771269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.771296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.771490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.771625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.771652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.771865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.772294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.772649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.772884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.773047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.773219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.773246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.773460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.773670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.773697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.773885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.774293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.774655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.774886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.775103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.775465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.775791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.775989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.776187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.776361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.776388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.776567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.776728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.776755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.776966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.777381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.777791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.777988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.778159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.778354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.778381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.778623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.778790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.778817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.779024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.779188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.779214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.774 qpair failed and we were unable to recover it.
00:24:09.774 [2024-04-24 22:15:51.779415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.774 [2024-04-24 22:15:51.779604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.779630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.779785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.779916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.779942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.780126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.780279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.780305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.780508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.780718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.780744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.780894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.781349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.781798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.781989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.782166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.782322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.782349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.782566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.782723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.782750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.782942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.783290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.783688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.783880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.784085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.784237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.784264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.784426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.784636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.784663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.784889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.785321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.785642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.785862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.786063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.786239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.786266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.786450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.786618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.786645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.786812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.787232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.787660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.787883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.788054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.788251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.788278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.788451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.788624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.775 [2024-04-24 22:15:51.788651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.775 qpair failed and we were unable to recover it.
00:24:09.775 [2024-04-24 22:15:51.788867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.789309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.789709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.789915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.790078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.790225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.790251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.790464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.790679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.790708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.791003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.791370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.791748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.791940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.792103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.792273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.792300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.792505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.792676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.792702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.792915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.793310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.793701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.793922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.794121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.794325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.794352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.794504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.794676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.794704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.794866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.795263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.795632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.795830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.796035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.796357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.796776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.796977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.797186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.797318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.797345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.797530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.797725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.797752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.797956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.798361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.798767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.798957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.799116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.799333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.799360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.776 [2024-04-24 22:15:51.799549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.799755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.776 [2024-04-24 22:15:51.799782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.776 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.799976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.800337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.800718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.800956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.801167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.801383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.801417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.801609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.801783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.801809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.801993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.802154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.802180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.802377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.802611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.802638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.802817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.802992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.803019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.803214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.803385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.803420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.803642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.803830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.803857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.803992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.804166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.777 [2024-04-24 22:15:51.804193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.777 qpair failed and we were unable to recover it.
00:24:09.777 [2024-04-24 22:15:51.804401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.804582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.804609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.804779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.804969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.804996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.805217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.805503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.805532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.805740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.805902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.805929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 
00:24:09.777 [2024-04-24 22:15:51.806127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.806265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.806291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.806434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.806634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.806660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.806862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.807255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 
00:24:09.777 [2024-04-24 22:15:51.807599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.807822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.808064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.808242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.808269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.808483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.808686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.808712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.808924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.809123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.809151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 
00:24:09.777 [2024-04-24 22:15:51.809341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.809505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.809533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.777 qpair failed and we were unable to recover it. 00:24:09.777 [2024-04-24 22:15:51.809702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.777 [2024-04-24 22:15:51.809897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.809924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.810122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.810318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.810345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.810543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.810740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.810767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 
00:24:09.778 [2024-04-24 22:15:51.810939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.811361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.811737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.811964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.812181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.812354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.812381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 
00:24:09.778 [2024-04-24 22:15:51.812581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.812785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.812812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.812993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.813363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.813741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.813963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 
00:24:09.778 [2024-04-24 22:15:51.814174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.814355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.814382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.814604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.814765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.814791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.814960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.815168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.815195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.815363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.815582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.815610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 
00:24:09.778 [2024-04-24 22:15:51.815844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.816211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.816581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.816780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.816970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.817131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.817163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 
00:24:09.778 [2024-04-24 22:15:51.817322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.817487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.817515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.817739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.817973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.818000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.778 [2024-04-24 22:15:51.818197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.818458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.778 [2024-04-24 22:15:51.818485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.778 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.818657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.818848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.818874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.819005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.819202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.819229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.819493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.819688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.819715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.819915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.820364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.820734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.820941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.821146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.821353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.821385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.821572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.821743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.821770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.821914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.822121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.822148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.822430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.822597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.822624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.822812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.823232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.823583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.823813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.823996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.824169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.824195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.824408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.824552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.824579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.824790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.824985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.825012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.825211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.825374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.825413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.825595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.825772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.825799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.825951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.826321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.826751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.826964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.827138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.827357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.827383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.827564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.827781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.827808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.827996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.828431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 
00:24:09.779 [2024-04-24 22:15:51.828779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.828989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.779 qpair failed and we were unable to recover it. 00:24:09.779 [2024-04-24 22:15:51.829133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.779 [2024-04-24 22:15:51.829317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.829349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.829515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.829655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.829682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.829839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.829993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.830020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.830225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.830431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.830459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.830666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.830812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.830839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.831039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.831234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.831261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.831436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.831615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.831642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.831856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.832204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.832626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.832817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.833028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.833399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.833779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.833934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.834116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.834433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.834741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.834938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.835100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.835248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.835275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.835458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.835586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.835613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.835772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.835978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.836005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.836147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.836345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.836371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.836557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.836711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.836738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.836920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.837293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 
00:24:09.780 [2024-04-24 22:15:51.837658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.837880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.838013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.838207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.838233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.838379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.838547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.780 [2024-04-24 22:15:51.838575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.780 qpair failed and we were unable to recover it. 00:24:09.780 [2024-04-24 22:15:51.838777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.838967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.838994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.839187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.839350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.839377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.839562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.839729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.839756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.839913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.840266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.840626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.840796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.840997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.841388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.841779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.841950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.842128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.842267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.842294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.842436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.842605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.842632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.842805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.843245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.843638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.843794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.843982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.844381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.844708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.844900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.845077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.845430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.845757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.845940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.846118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.846496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.846835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.846986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.847184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.847353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.847380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.847553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.847727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.847754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 
00:24:09.781 [2024-04-24 22:15:51.847932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.848313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.781 qpair failed and we were unable to recover it. 00:24:09.781 [2024-04-24 22:15:51.848646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.781 [2024-04-24 22:15:51.848893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.849079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.849281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.849308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 
00:24:09.782 [2024-04-24 22:15:51.849506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.849683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.849710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.849890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.850282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.850641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.850839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 
00:24:09.782 [2024-04-24 22:15:51.851031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.851429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.851796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.851990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.852163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.852330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.852357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 
00:24:09.782 [2024-04-24 22:15:51.852542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.852742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.852769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.852978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.853379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.853785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.853992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 
00:24:09.782 [2024-04-24 22:15:51.854148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.854328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.854355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.854547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.854734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.854761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.854941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.855109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.855137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 00:24:09.782 [2024-04-24 22:15:51.855306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.855496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.782 [2024-04-24 22:15:51.855524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.782 qpair failed and we were unable to recover it. 
00:24:09.782 [... the same failure sequence repeats from 2024-04-24 22:15:51.855678 through 22:15:51.887046 (elapsed 00:24:09.782-00:24:09.785): two posix.c:1037:posix_sock_create "*ERROR*: connect() failed, errno = 111" entries, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "*ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:24:09.785 [2024-04-24 22:15:51.887254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.887376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.887410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.785 qpair failed and we were unable to recover it. 00:24:09.785 [2024-04-24 22:15:51.887551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.887693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.887720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.785 qpair failed and we were unable to recover it. 00:24:09.785 [2024-04-24 22:15:51.887860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.785 qpair failed and we were unable to recover it. 00:24:09.785 [2024-04-24 22:15:51.888215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.785 qpair failed and we were unable to recover it. 
00:24:09.785 [2024-04-24 22:15:51.888629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.785 [2024-04-24 22:15:51.888821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.785 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.889016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.889185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.889212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.889389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.889601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.889633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.889838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.889974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.890001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.890190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.890390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.890427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.890568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.890761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.890788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.890951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.891369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.891772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.891991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.892190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.892381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.892418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.892556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.892725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.892752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.892897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.893253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.893590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.893790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.893953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.894346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.894707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.894897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.895098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.895431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.895810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.895978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.896166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.896334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.896361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.896561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.896713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.896740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.896953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.897354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.897702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.897913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.898081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.898255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.898281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.898450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.898657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.898684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.898847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.899010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.899036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-24 22:15:51.899173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.899354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.899381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-24 22:15:51.899575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.786 [2024-04-24 22:15:51.899746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.899773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.899985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.900293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.900659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.900896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.901085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.901253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.901280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.901477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.901626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.901654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.901832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.902177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.902542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.902706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.902871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.903304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.903657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.903851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.904025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.904178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.904214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.904380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.904570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.904598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.904777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.904988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.905015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.905221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.905410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.905439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.905582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.905793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.905820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.906014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.906459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.906740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.906897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.907081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.907273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.907300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.907506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.907688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.907715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.907917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.908126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.908152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-24 22:15:51.908346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.908500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.908529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-24 22:15:51.908719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.787 [2024-04-24 22:15:51.908925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.908953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.909151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.909312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.909338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.909487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.909659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.909686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.909852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.909995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.910022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.910149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.910307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.910334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.910533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.910714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.910741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.910877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.911259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.911650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.911844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.912044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.912428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.912769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.912969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.913102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.913274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.913301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.913450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.913619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.913646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.913822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.914175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.914543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.914739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.914910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.915228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.915653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.915852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.916017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.916195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.916222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.916368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.916551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.916579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.916777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.917012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.917199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.917414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.917442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.917624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.917792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.917820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.917988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.918355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 
00:24:09.788 [2024-04-24 22:15:51.918730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.918915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.919118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.919289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.788 [2024-04-24 22:15:51.919316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.788 qpair failed and we were unable to recover it. 00:24:09.788 [2024-04-24 22:15:51.919526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.919694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.919721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.919912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.920301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.920711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.920908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.921081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.921245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.921271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.921479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.921681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.921709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.921885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.922322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.922690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.922867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.923040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.923186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.923212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.923380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.923588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.923616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.923813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.923979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.924007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.924177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.924373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.924415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.924613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.924766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.924793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.924989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.925348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.925741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.925942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.926107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.926274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.926301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.926486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.926676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.926703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.926856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.926986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.927013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.927220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.927429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.927457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.927595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.927755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.927782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.927978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.928344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.928760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.928984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.929129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.929296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.929323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 
00:24:09.789 [2024-04-24 22:15:51.929490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.929628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.929655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.929832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.930010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.930037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.789 [2024-04-24 22:15:51.930220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.930376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.789 [2024-04-24 22:15:51.930412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.789 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.930552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.930746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.930772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.930981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.931343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.931764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.931920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.932088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.932287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.932313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.932548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.932703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.932730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.932891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.933091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.933118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.933320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.933525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.933553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.933808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.934229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.934599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.934796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.934994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.935192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.935219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.935471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.935648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.935676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.935885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.936284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.936645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.936805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.936977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.937351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.937758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.937952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.938080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.938276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.938303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.938473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.938671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.938698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.938872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.939266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.939630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.939827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.940041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.940219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.940246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.790 [2024-04-24 22:15:51.940456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.940655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.940682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.940854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.941298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 00:24:09.790 [2024-04-24 22:15:51.941706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.790 [2024-04-24 22:15:51.941916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.790 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.942099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.942271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.942298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.942507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.942671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.942698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.942882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.943301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.943678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.943902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.944102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.944262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.944290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.944461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.944666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.944693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.944877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.945611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.945799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.945970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.946365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.946766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.946979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.947191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.947372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.947411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.947568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.947703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.947730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.947945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.948146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.948173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.948382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.948600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.948627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.948877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.949258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.949697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.949889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.950057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.950218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.950245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.950410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.950576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.950603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.950774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.951301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.951731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.951886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.952080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.952411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 00:24:09.791 [2024-04-24 22:15:51.952724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.952925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.791 qpair failed and we were unable to recover it. 
00:24:09.791 [2024-04-24 22:15:51.953113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.791 [2024-04-24 22:15:51.953303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.953330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.953499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.953677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.953704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.953847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.954228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.954608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.954821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.955007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.955434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.955802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.955966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.956148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.956340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.956367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.956574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.956782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.956809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.957017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.957194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.957222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.957379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.957593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.957621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.957820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.957982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.958147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.958463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.958780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.958938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.959118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.959320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.959347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.959527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.959669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.959697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.959893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.960313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.960721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.960880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.961077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.961242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.961269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.961440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.961625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.961652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 00:24:09.792 [2024-04-24 22:15:51.961863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.962031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.962059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.792 qpair failed and we were unable to recover it. 
00:24:09.792 [2024-04-24 22:15:51.962253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.792 [2024-04-24 22:15:51.962427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.962455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.962618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.962760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.962787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.962932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.963346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 
00:24:09.793 [2024-04-24 22:15:51.963682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.963858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.964038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.964178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.964205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.964422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.964612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.964639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.964832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.964988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.965015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 
00:24:09.793 [2024-04-24 22:15:51.965213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.965416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.965444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.965610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.965766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.965793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.965989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.966162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.966189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 00:24:09.793 [2024-04-24 22:15:51.966370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.966549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.793 [2024-04-24 22:15:51.966578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.793 qpair failed and we were unable to recover it. 
00:24:09.793 [2024-04-24 22:15:51.966753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.966979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.967006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.967222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.967440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.967469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.967652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.967786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.967812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.968027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.968239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.968266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.968472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.968610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.968637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.968857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.969265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.969654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.969858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.970098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.970264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.970291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.970491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.970644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.970671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.970829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.971246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.971591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.971784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.971959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.972327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.972666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.972909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.973121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.973268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.973294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.793 qpair failed and we were unable to recover it.
00:24:09.793 [2024-04-24 22:15:51.973500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.793 [2024-04-24 22:15:51.973662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.973689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.973898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.974332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.974696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.974890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.975093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.975291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.975318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.975532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.975661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.975688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.975863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.976251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.976652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.976863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.977024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.977381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.977779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.977937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.978105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.978303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.978329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.978506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.978687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.978714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.978900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.979292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.979692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.979904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.980085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.980431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.980784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.980976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.981158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.981526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.981827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.981986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.982155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.982338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.982366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.982503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.982664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.982692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.982888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.983233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.983558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.983747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.983912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.984123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.794 [2024-04-24 22:15:51.984150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.794 qpair failed and we were unable to recover it.
00:24:09.794 [2024-04-24 22:15:51.984321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.984503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.984531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.984688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.984844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.984871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.985044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.985421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.985726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.985880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.986054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.986412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.986758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.986952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.987118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.987275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.987302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.987462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.987612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.987639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.987814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.987986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.988013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.988183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.988391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.988436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.988579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.988762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.988789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.988942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.989361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.989712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.989949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.990106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.990310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.990337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.990500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.990661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.990688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.990846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.991248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.991629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.991834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.991971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.992299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.992701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.992891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.993068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.993276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.993304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.993481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.993645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.795 [2024-04-24 22:15:51.993672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:09.795 qpair failed and we were unable to recover it.
00:24:09.795 [2024-04-24 22:15:51.993840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.795 qpair failed and we were unable to recover it. 00:24:09.795 [2024-04-24 22:15:51.994238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.795 qpair failed and we were unable to recover it. 00:24:09.795 [2024-04-24 22:15:51.994590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.795 [2024-04-24 22:15:51.994754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.795 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.994941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:51.995325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.995782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.995944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.996161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.996329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.996355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.996504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.996674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.996701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:51.996859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.997214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.997530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.997685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.997842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:51.998188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.998585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.998804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.998954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:51.999346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:51.999648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:51.999866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.000082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.000276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.000303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.000500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.000631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.000663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.000869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:52.001241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.001623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.001833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.002017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.002297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:52.002628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.002837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.003024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.003334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.003651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.003837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 
00:24:09.796 [2024-04-24 22:15:52.004002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.004192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.004224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.796 qpair failed and we were unable to recover it. 00:24:09.796 [2024-04-24 22:15:52.004359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.004495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.796 [2024-04-24 22:15:52.004523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.004700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.004869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.004896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.005101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.005308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.005335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 
00:24:09.797 [2024-04-24 22:15:52.005489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.005619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.005646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.005842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.006249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.006611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.006830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 
00:24:09.797 [2024-04-24 22:15:52.006968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.007350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.007696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.007905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.008044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.008218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.008246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 
00:24:09.797 [2024-04-24 22:15:52.008445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.008601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.008629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.008812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.008979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.009006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.009173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.009375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.797 [2024-04-24 22:15:52.009413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:09.797 qpair failed and we were unable to recover it. 00:24:09.797 [2024-04-24 22:15:52.009548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.009744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.009771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 
00:24:10.070 [2024-04-24 22:15:52.009973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.010431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.010796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.010996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.011175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.011369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.011403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 
00:24:10.070 [2024-04-24 22:15:52.011550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.011690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.011717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.011941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.012283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.012669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.012834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 
00:24:10.070 [2024-04-24 22:15:52.012995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.013380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.013713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.013894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.014049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 
00:24:10.070 [2024-04-24 22:15:52.014416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.014728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.014910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.070 [2024-04-24 22:15:52.015122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.015303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.070 [2024-04-24 22:15:52.015329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.070 qpair failed and we were unable to recover it. 00:24:10.071 [2024-04-24 22:15:52.015518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.015665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.015691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.071 qpair failed and we were unable to recover it. 
00:24:10.071 [2024-04-24 22:15:52.015907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.071 qpair failed and we were unable to recover it. 00:24:10.071 [2024-04-24 22:15:52.016290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.071 qpair failed and we were unable to recover it. 00:24:10.071 [2024-04-24 22:15:52.016621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.016818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.071 qpair failed and we were unable to recover it. 00:24:10.071 [2024-04-24 22:15:52.017004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.017174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.071 [2024-04-24 22:15:52.017200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.071 qpair failed and we were unable to recover it. 
00:24:10.071 [2024-04-24 22:15:52.017371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.017551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.017579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.017762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.017943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.017970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.018176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.018341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.018368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.018526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.018667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.018694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.018826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.018973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.019000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.019166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.019371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.019406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.019554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.019721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.019747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.019932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.020319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.020659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.020891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.021052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.021289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.021316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.021498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.021639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.021666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.021846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.022258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.022595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.022830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.023044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.023449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.023824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.023982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.024116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.024285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.024312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.024471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.024633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.024668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.024866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.025259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.025595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.025785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.071 qpair failed and we were unable to recover it.
00:24:10.071 [2024-04-24 22:15:52.025948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.071 [2024-04-24 22:15:52.026175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.026202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.026443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.026609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.026637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.026851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.027242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.027609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.027786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.027943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.028329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.028669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.028890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.029061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.029402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.029770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.029960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.030124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.030306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.030332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.030493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.030636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.030663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.030849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.031215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.031553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.031803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.031966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.032341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.032673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.032861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.033016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.033359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.033660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.033847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.033993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.034931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.034964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.035158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.035381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.035418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.035585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.035795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.035823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.035997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.036348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.036729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.036939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.072 [2024-04-24 22:15:52.037108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.037296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.072 [2024-04-24 22:15:52.037324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.072 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.037488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.037623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.037652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.037829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.037975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.038002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.038174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.038310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.038337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.038528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.038667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.038695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.038914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.039336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.039667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.039857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.040062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.040236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.040264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.040422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.040581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.040609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.040781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.040981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.041009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.041193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.041380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.041418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.041575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.041705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.041732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.041926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.042325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.042731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.042962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.043104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.043248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.043274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.043476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.043632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.043659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.043832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.044250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.044599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.044786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.044977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.045290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.045664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.045853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.046009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.046332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.046640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.046823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.046979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.047170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.047197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.047351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.047494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.073 [2024-04-24 22:15:52.047522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.073 qpair failed and we were unable to recover it.
00:24:10.073 [2024-04-24 22:15:52.047663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.047822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.047849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.048030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.048370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.048732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.048922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.049109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.049294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.049321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.049508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.049711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.049739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.049888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.050023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.050050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.050192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.050333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.074 [2024-04-24 22:15:52.050361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.074 qpair failed and we were unable to recover it.
00:24:10.074 [2024-04-24 22:15:52.050521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.050677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.050704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.050856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.051225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.051555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.051741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 
00:24:10.074 [2024-04-24 22:15:52.051923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.052262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.052602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.052812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.052938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 
00:24:10.074 [2024-04-24 22:15:52.053302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.053622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.053779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.053952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.054233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 
00:24:10.074 [2024-04-24 22:15:52.054577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.054793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.054955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.055233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.055565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 
00:24:10.074 [2024-04-24 22:15:52.055848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.055997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.056183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.056342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.056374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.074 qpair failed and we were unable to recover it. 00:24:10.074 [2024-04-24 22:15:52.056518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.056680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.074 [2024-04-24 22:15:52.056707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.056871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.057213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.057556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.057716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.057849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.058230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.058569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.058781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.058936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.059289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.059565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.059734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.059902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.060178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.060510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.060811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.060994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.061150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.061508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.061821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.061986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.062138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.062431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.062771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.062956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.063117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.063459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.063777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.063952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.064105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.064452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 00:24:10.075 [2024-04-24 22:15:52.064750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.075 [2024-04-24 22:15:52.064967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.075 qpair failed and we were unable to recover it. 
00:24:10.075 [2024-04-24 22:15:52.065112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.065457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.065785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.065978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.066163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 
00:24:10.076 [2024-04-24 22:15:52.066453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.066784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.066964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.067094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.067455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 
00:24:10.076 [2024-04-24 22:15:52.067772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.067950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.068107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.068413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 00:24:10.076 [2024-04-24 22:15:52.068699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.076 [2024-04-24 22:15:52.068891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.076 qpair failed and we were unable to recover it. 
00:24:10.076 [2024-04-24 22:15:52.069047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.076 [2024-04-24 22:15:52.069179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.076 [2024-04-24 22:15:52.069205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.076 qpair failed and we were unable to recover it.
[log truncated: the four-line sequence above — two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f9d68000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats identically, with event timestamps advancing from 22:15:52.069 through 22:15:52.099]
00:24:10.079 [2024-04-24 22:15:52.099847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.100253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.100580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.100798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.101010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.101192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.101219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 
00:24:10.079 [2024-04-24 22:15:52.101438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.101577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.101605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.101815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.101998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.102025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.102209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.102362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.102389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.102547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.102770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.102797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 
00:24:10.079 [2024-04-24 22:15:52.102981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.103181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.103208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.103355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.103533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.103561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.079 qpair failed and we were unable to recover it. 00:24:10.079 [2024-04-24 22:15:52.103711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.079 [2024-04-24 22:15:52.103879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.103906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.104084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.104270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.104297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.104452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.104609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.104636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.104832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.105240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.105572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.105754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.105927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.106274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.106574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.106798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.106997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.107428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.107758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.107955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.108117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.108289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.108316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.108474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.108607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.108634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.108784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.108976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.109003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.109155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.109338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.109366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.109530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.109714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.109741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.109901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.110257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.110591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.110753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.110897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.111266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.111562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.111789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.111992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.112367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.112680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.112848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 
00:24:10.080 [2024-04-24 22:15:52.113034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.113361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.113721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.080 [2024-04-24 22:15:52.113909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.080 qpair failed and we were unable to recover it. 00:24:10.080 [2024-04-24 22:15:52.114067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [2024-04-24 22:15:52.114428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.114786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.114971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.115157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.115315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.115343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.115519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.115673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.115710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [2024-04-24 22:15:52.115874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.116242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.116598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.116758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.116923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [2024-04-24 22:15:52.117316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.117731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.117907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.118065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.118253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.118280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.118466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.118618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.118645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [2024-04-24 22:15:52.118840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.118992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.119019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.119180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.119351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.119379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.119521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.119661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.119693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.119844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [2024-04-24 22:15:52.120223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.120601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.120794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.120983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.121109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.121136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 00:24:10.081 [2024-04-24 22:15:52.121271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.121476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.081 [2024-04-24 22:15:52.121505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.081 qpair failed and we were unable to recover it. 
00:24:10.081 [... the same four-line retry cycle ("connect() failed, errno = 111" twice, "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats continuously from 22:15:52.121626 through 22:15:52.151415 (Jenkins offsets 00:24:10.081-00:24:10.084); repeated entries elided ...]
00:24:10.084 [2024-04-24 22:15:52.151544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.151733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.151760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.084 qpair failed and we were unable to recover it. 00:24:10.084 [2024-04-24 22:15:52.151968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.084 qpair failed and we were unable to recover it. 00:24:10.084 [2024-04-24 22:15:52.152366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.084 qpair failed and we were unable to recover it. 00:24:10.084 [2024-04-24 22:15:52.152703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.084 [2024-04-24 22:15:52.152900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.084 qpair failed and we were unable to recover it. 
00:24:10.084 [2024-04-24 22:15:52.153073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.153419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.153773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.153925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.154091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.154246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.154273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.154458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.154646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.154674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.154847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.155161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.155516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.155735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.155894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.156235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.156669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.156853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.157004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.157346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.157679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.157839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.158025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.158184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.158211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.158401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.158591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.158619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.158813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.158998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.159152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.159501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.159843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.159984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.160011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.160194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.160351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.160378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.160546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.160707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.160734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.160932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.161294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 
00:24:10.085 [2024-04-24 22:15:52.161659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.161874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.162058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.162225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.085 [2024-04-24 22:15:52.162252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.085 qpair failed and we were unable to recover it. 00:24:10.085 [2024-04-24 22:15:52.162418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.162579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.162606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.162743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.162909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.162936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.163123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.163309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.163336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.163487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.163682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.163709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.163917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.164265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.164650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.164838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.165012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.165354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.165748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.165925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.166082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.166449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.166801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.166989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.167145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.167290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.167317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.167501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.167675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.167702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.167875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.168217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.168604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.168782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.168971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.169340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.169707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.169911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.170084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.170461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.170802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.170979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.171129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.171333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.171360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 00:24:10.086 [2024-04-24 22:15:52.171578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.171760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.086 [2024-04-24 22:15:52.171787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.086 qpair failed and we were unable to recover it. 
00:24:10.086 [2024-04-24 22:15:52.171983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.086 qpair failed and we were unable to recover it.
00:24:10.086 [2024-04-24 22:15:52.172355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.086 qpair failed and we were unable to recover it.
00:24:10.086 [2024-04-24 22:15:52.172705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.086 [2024-04-24 22:15:52.172904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.086 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.173077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.173239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.173266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.173409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.173566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.173593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.173798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.173968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.174000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.174200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.174375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.174411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.174585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.174764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.174791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.174950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.175306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.175727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.175931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.176129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.176308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.176334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.176543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.176730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.176757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.176940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.177314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.177784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.177986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.178181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.178317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.178344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.178524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.178667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.178694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.178875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.179355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.179726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.179936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.180135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.180341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.180367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.180580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.180765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.180793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.180990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.181371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.181755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.181950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.182147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.182367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.182401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.182570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.182786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.182813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.087 qpair failed and we were unable to recover it.
00:24:10.087 [2024-04-24 22:15:52.182936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.087 [2024-04-24 22:15:52.183168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.183195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.183383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.183592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.183619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.183819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.184255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.184600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.184792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.184962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.185378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.185771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.185950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.186083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.186294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.186321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.186502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.186651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.186678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.186806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.187235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.187601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.187824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.188065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.188214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.188241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.188467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.188656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.188686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.188860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.189249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.189705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.189950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.190145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.190306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.190333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.190505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.190712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.190738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.190916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.191274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.191660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.191884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.192094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.192299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.192326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.192507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.192671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.192698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.192900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.193247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.088 qpair failed and we were unable to recover it.
00:24:10.088 [2024-04-24 22:15:52.193639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.088 [2024-04-24 22:15:52.193858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.194109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.194272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.194299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.194514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.194706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.194733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.194929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.195299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.195693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.195918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.196094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.196263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.196290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.196463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.196642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.196669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.196880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.197264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.197661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.197843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.198018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.198230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.198258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.198444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.198612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.198639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.198823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.199240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.199651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.199841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.200015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.200201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.200228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.200453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.200663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.200690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.200912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.201342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.201731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.201953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.202161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.202359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.202386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.089 qpair failed and we were unable to recover it.
00:24:10.089 [2024-04-24 22:15:52.202564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.089 [2024-04-24 22:15:52.202740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.202767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.202934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.203314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.203735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.203905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.204102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.204265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.204292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.204456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.204624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.204651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.204856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.205229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.205602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.205836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.206050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.206219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.206247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.206452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.206593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.090 [2024-04-24 22:15:52.206620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.090 qpair failed and we were unable to recover it.
00:24:10.090 [2024-04-24 22:15:52.206815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.206977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.207004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.207186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.207357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.207384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.207578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.207771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.207798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.208008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.208220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.208247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 
00:24:10.090 [2024-04-24 22:15:52.208392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.208622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.208649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.208830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.208992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.209019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.209210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.209368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.209410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.209573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.209737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.209764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 
00:24:10.090 [2024-04-24 22:15:52.209976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.210162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.210189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.210375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.210589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.210617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.210841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.211207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 
00:24:10.090 [2024-04-24 22:15:52.211639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.211836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.212031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.212230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.212257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.212446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.212631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.212658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.212824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.212994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.213021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 
00:24:10.090 [2024-04-24 22:15:52.213230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.213399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.213427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.090 qpair failed and we were unable to recover it. 00:24:10.090 [2024-04-24 22:15:52.213625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.213822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.090 [2024-04-24 22:15:52.213849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.214060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.214270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.214297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.214472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.214639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.214667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.214816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.214998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.215025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.215231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.215414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.215442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.215622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.215811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.215838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.216058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.216223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.216250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.216465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.216647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.216674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.216855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.217227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.217650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.217862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.218035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.218244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.218272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.218541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.218797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.218824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.219030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.219212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.219239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.219415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.219595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.219622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.219821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.219997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.220023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.220192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.220362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.220389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.220545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.220737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.220764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.220950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.221347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.221719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.221928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.222113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.222296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.222323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.222526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.222727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.222754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.222944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.223308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.223716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.223937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.224176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.224388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.224424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 
00:24:10.091 [2024-04-24 22:15:52.224700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.224913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.091 [2024-04-24 22:15:52.224940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.091 qpair failed and we were unable to recover it. 00:24:10.091 [2024-04-24 22:15:52.225106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.225293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.225320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.225525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.225691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.225718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.225940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 
00:24:10.092 [2024-04-24 22:15:52.226274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.226673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.226870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.227041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.227252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.227279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.227472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.227645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.227672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 
00:24:10.092 [2024-04-24 22:15:52.227873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.228267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.228704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.228915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.229117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.229314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.229341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 
00:24:10.092 [2024-04-24 22:15:52.229552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.229738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.229765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.229911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.230298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 00:24:10.092 [2024-04-24 22:15:52.230682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.092 [2024-04-24 22:15:52.230890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.092 qpair failed and we were unable to recover it. 
00:24:10.095 [2024-04-24 22:15:52.264616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.264784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.264811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.095 qpair failed and we were unable to recover it. 00:24:10.095 [2024-04-24 22:15:52.264993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.265198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.265225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.095 qpair failed and we were unable to recover it. 00:24:10.095 [2024-04-24 22:15:52.265423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.265608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.095 [2024-04-24 22:15:52.265635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.095 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.265837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.266192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.266604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.266798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.266966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.267320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.267701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.267901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.268082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.268256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.268283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.268466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.268641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.268669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.268848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.269255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.269600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.269832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.270080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.270289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.270316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.270517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.270691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.270719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.270916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.271323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.271666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.271853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.271997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.272444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.272805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.272960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.273159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.273403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.273431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.273619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.273804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.273831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.274031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.274374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.274750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.274984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.275170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.275342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.275369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 
00:24:10.096 [2024-04-24 22:15:52.275585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.275797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.275824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.276003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.276162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.276196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.276363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.276565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.276593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.096 qpair failed and we were unable to recover it. 00:24:10.096 [2024-04-24 22:15:52.276805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.096 [2024-04-24 22:15:52.276970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.276997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.277198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.277390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.277427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.277628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.277810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.277838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.278045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.278232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.278259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.278423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.278631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.278658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.278831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.279297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.279784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.279946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.280152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.280313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.280340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.280531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.280742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.280769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.280941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.281277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.281683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.281846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.282039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.282236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.282263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.282471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.282634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.282662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.282870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.283215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.283571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.283756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.283939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.284388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.284787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.284998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.285212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.285403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.285431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.285590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.285783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.285810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.285982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.286178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.286206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 00:24:10.097 [2024-04-24 22:15:52.286404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.286617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.097 [2024-04-24 22:15:52.286645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.097 qpair failed and we were unable to recover it. 
00:24:10.097 [2024-04-24 22:15:52.286900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.097 [2024-04-24 22:15:52.287080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.097 [2024-04-24 22:15:52.287107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.097 qpair failed and we were unable to recover it.
00:24:10.097 [2024-04-24 22:15:52.287236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.097 [2024-04-24 22:15:52.287420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.287449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.287577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.287735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.287763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.288014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.288256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.288283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.288459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.288627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.288655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.288840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.289285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.289670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.289878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.290019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.290232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.290259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.290455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.290595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.290623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.290847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.291301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.291750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.291988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.292194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.292411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.292439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.292612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.292785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.292812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.293014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.293410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.293764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.293969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.294168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.294335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.294363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.294541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.294725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.294757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.294978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.295175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.295202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.295410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.295581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.295609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.295813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.296013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.296040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.098 qpair failed and we were unable to recover it.
00:24:10.098 [2024-04-24 22:15:52.296212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.098 [2024-04-24 22:15:52.296352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.296379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.296600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.296737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.296764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.296911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.297378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.297708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.297949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.298154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.298318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.298345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.298486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.298663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.298695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.298868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.299284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.299688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.299855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.299996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.300367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.300793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.300968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.301169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.301372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.301405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.301626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.301833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.301861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.302031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.302256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.302283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.302476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.302670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.302702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.302909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.303341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.303710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.303942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.304097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.304254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.304281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.304467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.304668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.304696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.304868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.305271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.305702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.305928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.306137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.306321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.306348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.306558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.306722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.306754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.306947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.307113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.307140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.307358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.307560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.099 [2024-04-24 22:15:52.307595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.099 qpair failed and we were unable to recover it.
00:24:10.099 [2024-04-24 22:15:52.307798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.307958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.307985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.308192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.308401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.308429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.308628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.308757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.308784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.308948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.309328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.309721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.309952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.310150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.310345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.310372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.100 [2024-04-24 22:15:52.310600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.310817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.100 [2024-04-24 22:15:52.310844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.100 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.311014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.311399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.311756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.311914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.312102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.312278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.312305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.312520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.312693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.312720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.312885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.313272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.313733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.313930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.314105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.314234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.314262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.314456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.314652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.314679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.373 qpair failed and we were unable to recover it.
00:24:10.373 [2024-04-24 22:15:52.314906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.373 [2024-04-24 22:15:52.315071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.315099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.315297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.315483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.315511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.315708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.315886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.315913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.316109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.316285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.316312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.316480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.316684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.316711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.316927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.317344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.317751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.317928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.318139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.318342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.318369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.318598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.318785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.318812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.319036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.319248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.319275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.319504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.319688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.319715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.319869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.320217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.320617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.320821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.320999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.321202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.321229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.321448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.321647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.321674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.321874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.322039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.374 [2024-04-24 22:15:52.322066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.374 qpair failed and we were unable to recover it.
00:24:10.374 [2024-04-24 22:15:52.322239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.322375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.322411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.322619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.322754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.322782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.322989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.323155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.323183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.323380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.323595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.323622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 
00:24:10.374 [2024-04-24 22:15:52.323838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.323993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.324026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.324201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.324364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.324391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.324601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.324761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.324793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.325022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.325181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.325208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 
00:24:10.374 [2024-04-24 22:15:52.325409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.325595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.325622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.325834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.325980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.374 [2024-04-24 22:15:52.326007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.374 qpair failed and we were unable to recover it. 00:24:10.374 [2024-04-24 22:15:52.326213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.326410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.326439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.326637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.326836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.326863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.327052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.327221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.327248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.327415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.327580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.327607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.327854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.328321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.328790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.328988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.329192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.329408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.329437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.329590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.329718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.329746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.329954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.330118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.330146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.330319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.330517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.330545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.330782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.330973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.331000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.331246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.331377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.331412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.331553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.331744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.331771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.331986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.332156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.332183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.332380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.332586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.332613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.332821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.332981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.333008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.333168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.333363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.333390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.333605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.333815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.333842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.334051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.334468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.334763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.334954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.335089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.335226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.335253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.335417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.335616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.335643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.335832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.335986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.336013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.336145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.336332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.336359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 
00:24:10.375 [2024-04-24 22:15:52.336557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.336747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.336774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.336920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.337105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.375 [2024-04-24 22:15:52.337132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.375 qpair failed and we were unable to recover it. 00:24:10.375 [2024-04-24 22:15:52.337315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.337503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.337532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.337677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.337833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.337860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.376 [2024-04-24 22:15:52.338019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.338181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.338208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.338370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.338615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.338643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.338835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.339291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.376 [2024-04-24 22:15:52.339606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.339791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.339917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.340326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.340673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.340855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.376 [2024-04-24 22:15:52.341009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.341207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.341235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.341437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.341626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.341654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.341812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.341999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.342026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.342184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.342362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.342389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.376 [2024-04-24 22:15:52.342562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.342728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.342756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.342879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.343227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.343623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.343779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.376 [2024-04-24 22:15:52.344020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.344435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.344777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.344961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 00:24:10.376 [2024-04-24 22:15:52.345093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.345268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.376 [2024-04-24 22:15:52.345295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.376 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.375329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.375511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.375540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.375695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.375831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.375858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.376018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.376420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.376733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.376918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.377060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.377402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.377707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.377920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.378051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.378422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.378805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.378985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.379138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.379290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.379317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.379475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.379643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.379670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.379830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.379985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.380012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.380173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.380328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.380355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.380531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.380664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.380691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.380863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.381213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.381593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.381808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.381936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.382262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.382604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.382814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.382977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.383284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 
00:24:10.380 [2024-04-24 22:15:52.383660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.383847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.380 qpair failed and we were unable to recover it. 00:24:10.380 [2024-04-24 22:15:52.384006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.380 [2024-04-24 22:15:52.384168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.384195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.384385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.384559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.384587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.384743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.384887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.384914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.385075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.385402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.385741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.385958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.386116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.386269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.386297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.386438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.386590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.386617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.386776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.386973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.387000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.387158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.387343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.387370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.387544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.387703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.387731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.387891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.388263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.388623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.388833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.388996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.389383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.389762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.389939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.390100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.390260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.390287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.390457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.390648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.390675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.390840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.391210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.391552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.391734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.391920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 
00:24:10.381 [2024-04-24 22:15:52.392239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.392624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.392774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.392928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.393124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.393152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.381 qpair failed and we were unable to recover it. 00:24:10.381 [2024-04-24 22:15:52.393345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.381 [2024-04-24 22:15:52.393514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.393542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 
00:24:10.382 [2024-04-24 22:15:52.393733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.393891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.393918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.394081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.394426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.394750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.394943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 
00:24:10.382 [2024-04-24 22:15:52.395108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.395453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.395790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.395974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.396128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 
00:24:10.382 [2024-04-24 22:15:52.396458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.396781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.396994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.397151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.397312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.397340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 00:24:10.382 [2024-04-24 22:15:52.397464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.397627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.382 [2024-04-24 22:15:52.397654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.382 qpair failed and we were unable to recover it. 
00:24:10.382 [2024-04-24 22:15:52.397839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.398227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.398599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.398805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.398993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.399311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.399650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.399841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.400029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.400349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.400725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.400881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.401040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.401379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.401743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.401904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.402089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.402431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.402773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.402960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.382 qpair failed and we were unable to recover it.
00:24:10.382 [2024-04-24 22:15:52.403146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.382 [2024-04-24 22:15:52.403341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.403369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.403557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.403743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.403771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.403930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.404276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.404622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.404833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.404997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.405348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.405701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.405857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.406022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.406356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.406682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.406869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.407048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.407451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.407805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.407960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.408137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.408319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.408346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.408509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.408697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.408724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.408915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.409232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.409641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.409831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.410031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.410228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.410255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.410446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.410605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.410637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.383 qpair failed and we were unable to recover it.
00:24:10.383 [2024-04-24 22:15:52.410774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.383 [2024-04-24 22:15:52.410930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.410958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.411114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.411468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.411831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.411998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.412181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.412371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.412418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.412598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.412775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.412803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.413001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.413198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.413225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.413426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.413590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.413617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.413792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.414382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.414771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.414967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.415129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.415469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.415768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.415963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.416175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.416338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.416365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.416561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.416771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.416799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.417021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.417411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.417782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.417980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.418157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.418322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.418355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.418569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.418759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.418786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.418913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.419294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.419732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.419969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.420151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.420372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.420419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.420593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.420777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.420805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.420980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.421197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.421224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.421490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.421680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.421707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.384 qpair failed and we were unable to recover it.
00:24:10.384 [2024-04-24 22:15:52.421915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.422082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.384 [2024-04-24 22:15:52.422109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.422229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.422407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.422439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.422607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.422774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.422801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.422998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.423387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.423747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.423957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.424123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.424290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.424317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.424530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.424715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.385 [2024-04-24 22:15:52.424742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.385 qpair failed and we were unable to recover it.
00:24:10.385 [2024-04-24 22:15:52.424949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.425344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.425719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.425931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.426128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.426299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.426326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 
00:24:10.385 [2024-04-24 22:15:52.426542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.426754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.426781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.426955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.427381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.427788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.427999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 
00:24:10.385 [2024-04-24 22:15:52.428161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.428375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.428421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.428605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.428771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.428799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.428995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.429368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 
00:24:10.385 [2024-04-24 22:15:52.429759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.429983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.430153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.430319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.430346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.430558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.430732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.430759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.430934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.431095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.431122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 
00:24:10.385 [2024-04-24 22:15:52.431333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.431539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.431568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.431767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.431992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.432018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.432188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.432406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.432433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.385 [2024-04-24 22:15:52.432614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.432813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.432840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 
00:24:10.385 [2024-04-24 22:15:52.432990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.433176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.385 [2024-04-24 22:15:52.433203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.385 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.433381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.433522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.433550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.433717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.433840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.433867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.434091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.434258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.434285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.434484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.434652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.434679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.434849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.435249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.435644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.435847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.436010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.436177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.436203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.436417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.436578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.436606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.436789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.436989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.437016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.437244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.437428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.437456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.437590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.437794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.437821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.438024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.438171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.438197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.438491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.438684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.438712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.438870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.439212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.439614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.439781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.439958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.440366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.440700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.440870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.441050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.441241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.441268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.441448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.441658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.441685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.441845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.441977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.442004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 
00:24:10.386 [2024-04-24 22:15:52.442178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.442353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.442380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.442568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.442768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.442796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.386 qpair failed and we were unable to recover it. 00:24:10.386 [2024-04-24 22:15:52.443006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.443214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.386 [2024-04-24 22:15:52.443241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.443428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.443622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.443649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.443857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.444261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.444659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.444898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.445087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.445289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.445317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.445511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.445686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.445713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.445840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.446216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.446595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.446828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.447082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.447252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.447279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.447462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.447640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.447668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.447873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.448300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.448726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.448919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.449117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.449286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.449313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.449473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.449637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.449664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.449858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.450263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.450701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.450883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.451089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.451267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.451294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.451458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.451639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.451666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.451852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.452166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.452572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.452770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 00:24:10.387 [2024-04-24 22:15:52.452974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.453154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.387 [2024-04-24 22:15:52.453181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.387 qpair failed and we were unable to recover it. 
00:24:10.387 [2024-04-24 22:15:52.453300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.387 [2024-04-24 22:15:52.453498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.387 [2024-04-24 22:15:52.453526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.387 qpair failed and we were unable to recover it.
00:24:10.387 [2024-04-24 22:15:52.453723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.387 [2024-04-24 22:15:52.453900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.387 [2024-04-24 22:15:52.453927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.454092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.454410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.454798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.454980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.455189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.455360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.455387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.455565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.455750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.455777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.455904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.456260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.456668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.456869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.457056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.457251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.457278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.457449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.457623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.457651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.457846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.458246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.458610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.458840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.459019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.459226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.459253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.459416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.459605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.459633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.459859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.460230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.460690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.460907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.461109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.461315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.461342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.461506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.461693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.461721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.461908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.462285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.462586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.462808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.463004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.463199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.463226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.388 [2024-04-24 22:15:52.463430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.463626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.388 [2024-04-24 22:15:52.463653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.388 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.463830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.464227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.464632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.464866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.465120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.465329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.465356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.465579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.465752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.465779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.465950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.466316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.466705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.466900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.467039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.467242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.467269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.467495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.467657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.467685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.467865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.468253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.468652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.468823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.469031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.469202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.469229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.469407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.469606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.469633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.469823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.470236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.470650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.470875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.471049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.471258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.471285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.471464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.471630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.471657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.471827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.472241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.472687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.472920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.473127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.473292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.473319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.473536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.473718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.473746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.473917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.474116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.474148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.474331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.474525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.474554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.474756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.475003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.475030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.389 qpair failed and we were unable to recover it.
00:24:10.389 [2024-04-24 22:15:52.475212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.389 [2024-04-24 22:15:52.475428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.475456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.475668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.475857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.475884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.476104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.476272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.476299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.476493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.476696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.476723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.476892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.477274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.477698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.477896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.478068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.478270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.478302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.478525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.478737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.478765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.478958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.479167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.479194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.479438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.479614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.479641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.479815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.480226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.480640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.480848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.481026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.481400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.481808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.390 [2024-04-24 22:15:52.481959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.390 qpair failed and we were unable to recover it.
00:24:10.390 [2024-04-24 22:15:52.482146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.482347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.482379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.482513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.482632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.482659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.482863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.483233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 
00:24:10.390 [2024-04-24 22:15:52.483562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.483763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.483899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.484322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.484706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.484890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 
00:24:10.390 [2024-04-24 22:15:52.485106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.485231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.485258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.485435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.485607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.485634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.485809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.485979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.486006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 00:24:10.390 [2024-04-24 22:15:52.486174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.486367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.390 [2024-04-24 22:15:52.486402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.390 qpair failed and we were unable to recover it. 
00:24:10.390 [2024-04-24 22:15:52.486540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.486742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.486769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.486968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.487391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.487753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.487989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.488186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.488401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.488429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.488576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.488780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.488808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.488988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.489118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.489145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.489345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.489551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.489579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.489789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.489983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.490011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.490214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.490420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.490449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.490634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.490810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.490838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.491015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.491211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.491238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.491436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.491607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.491634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.491841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.492245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.492647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.492844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.493012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.493422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.493785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.493978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.494196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.494323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.494350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.494499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.494682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.494710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.494870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.495291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.495669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.495902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.496035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.496199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.496226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.496414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.496579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.496607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.496768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.496979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.497006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.391 [2024-04-24 22:15:52.497170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.497373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.497414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 
00:24:10.391 [2024-04-24 22:15:52.497590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.497790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.391 [2024-04-24 22:15:52.497818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.391 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.498030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.498237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.498264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.498468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.498688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.498715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.498873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.392 [2024-04-24 22:15:52.499270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.499700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.499922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.500188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.500325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.500352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.500534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.500682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.500709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.392 [2024-04-24 22:15:52.500837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.501245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.501593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.501749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.501934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.392 [2024-04-24 22:15:52.502294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.502680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.502905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.503072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.503269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.503296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.503500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.503698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.503725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.392 [2024-04-24 22:15:52.503897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.504240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.504640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.504837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.505009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.392 [2024-04-24 22:15:52.505341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.505738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.505895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.506069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.506213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.506240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 00:24:10.392 [2024-04-24 22:15:52.506437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.506609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.392 [2024-04-24 22:15:52.506636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.392 qpair failed and we were unable to recover it. 
00:24:10.393 [2024-04-24 22:15:52.506836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.507309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.507701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.507898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.508080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.508289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.508316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.508505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.508672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.508699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.508875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.509274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.509653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.509857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.510067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.510273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.510307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.510553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.510732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.510759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.510946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.511162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.511189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.511407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.511569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.511596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.511810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.511997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.512024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.512246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.512450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.512479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.512646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.512846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.512873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.513073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.513241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.513268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.513473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.513601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.513629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.513833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.514193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.514592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.514776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.514961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.515175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.515203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.515411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.515590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.515618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.515787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.515979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.516006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.516273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.516439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.516467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.516637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.516797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.516829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.517040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.517202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.517230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.517441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.517612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.517640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.517826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.518049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.518077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.393 [2024-04-24 22:15:52.518267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.518466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.393 [2024-04-24 22:15:52.518494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.393 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.518710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.518917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.518944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.519142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.519318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.519345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.519559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.519749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.519777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.519966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.520376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.520752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.520945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.521473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.521785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.521982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.522153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.522344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.522372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.522561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.522729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.522756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.522951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.523361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.523771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.523947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.524141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.524311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.524338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.524516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.524693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.524720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.524921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.525279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.525655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.525868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.526067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.526408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.526777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.526956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.527145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.527315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.527342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.527518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.527703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.527730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.527881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.528325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.528718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.528922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.529106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.529290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.529317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.394 qpair failed and we were unable to recover it.
00:24:10.394 [2024-04-24 22:15:52.529531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.394 [2024-04-24 22:15:52.529678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.529706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.529902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.530328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.530653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.530861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.531031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.531240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.531267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.531441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.531649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.531676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.531841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.532227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.532647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.532871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.533072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.533232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.533259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.533409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.533625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.533658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.533848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.534222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.534624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.534870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.535070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.535244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.535271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.535474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.535646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.535673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.535840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.536277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.536646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.536869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.537070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.537231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.537258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.537427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.537628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.537660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.537858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.538261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.538674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.538898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.539113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.539289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.539316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.539490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.539666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.539693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.539902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.540289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.540725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.540993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.395 qpair failed and we were unable to recover it.
00:24:10.395 [2024-04-24 22:15:52.541206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.395 [2024-04-24 22:15:52.541446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.396 [2024-04-24 22:15:52.541474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.396 qpair failed and we were unable to recover it.
00:24:10.396 [2024-04-24 22:15:52.541668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.396 [2024-04-24 22:15:52.541801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.396 [2024-04-24 22:15:52.541834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.396 qpair failed and we were unable to recover it.
00:24:10.396 [2024-04-24 22:15:52.542003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.542196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.542223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.542429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.542637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.542664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.542863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.543345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.543779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.543997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.544210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.544404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.544432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.544634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.544802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.544829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.545014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.545211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.545239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.545430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.545592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.545619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.545818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.546273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.546609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.546809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.546938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.547179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.547206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.547443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.547614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.547641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.547842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.548255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.548753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.548941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.549072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.549202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.549229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.549426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.549634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.549661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.549824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.550322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.550727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.550960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.551180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.551322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.551349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.551527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.551724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.551752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 
00:24:10.396 [2024-04-24 22:15:52.551936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.552353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.552734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.396 [2024-04-24 22:15:52.552954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.396 qpair failed and we were unable to recover it. 00:24:10.396 [2024-04-24 22:15:52.553203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.553368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.553407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.553572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.553773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.553800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.553971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.554164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.554191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.554436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.554642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.554670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.554848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.555255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.555657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.555848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.556064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.556221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.556248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.556480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.556687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.556714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.556842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.557221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.557640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.557856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.558040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.558211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.558238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.558434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.558636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.558663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.558878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.559271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.559700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.559909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.560117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.560261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.560288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.560481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.560623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.560650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.560822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 
00:24:10.397 [2024-04-24 22:15:52.561766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.561947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.562108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.562237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.562264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.397 [2024-04-24 22:15:52.562443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.562605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.397 [2024-04-24 22:15:52.562632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.397 qpair failed and we were unable to recover it. 00:24:10.398 [2024-04-24 22:15:52.562791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.562949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.562976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.398 qpair failed and we were unable to recover it. 
00:24:10.398 [2024-04-24 22:15:52.563153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.563323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.563350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.398 qpair failed and we were unable to recover it. 00:24:10.398 [2024-04-24 22:15:52.563486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.563686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.563713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.398 qpair failed and we were unable to recover it. 00:24:10.398 [2024-04-24 22:15:52.563959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.564118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.564145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.398 qpair failed and we were unable to recover it. 00:24:10.398 [2024-04-24 22:15:52.564344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.564582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.398 [2024-04-24 22:15:52.564611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.398 qpair failed and we were unable to recover it. 
00:24:10.398 [2024-04-24 22:15:52.564841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.565254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.565669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.565823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.565985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.566181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.566208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.566417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.566616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.566643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.566899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.567332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.567750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.567926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.568114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.568325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.568352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.568491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.568669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.568697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.568909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.569297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.569672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.569890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.570086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.570214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.570241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.570490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.570664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.570691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.570927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.571163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.571191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.571391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.571606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.571634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.571825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.572222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.572646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.572870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.573080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.573224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.573251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.573431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.573599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.573627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.398 [2024-04-24 22:15:52.573801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.574009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.398 [2024-04-24 22:15:52.574036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.398 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.574173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.574359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.574387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.574605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.574759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.574787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.574953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 4039140 Killed                  "${NVMF_APP[@]}" "$@"
00:24:10.399 [2024-04-24 22:15:52.575148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.575178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.575340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.575550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.575579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 22:15:52 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:24:10.399 [2024-04-24 22:15:52.575743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:10.399 [2024-04-24 22:15:52.575906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.575934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 22:15:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:24:10.399 22:15:52 -- common/autotest_common.sh@710 -- # xtrace_disable
00:24:10.399 [2024-04-24 22:15:52.576177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- common/autotest_common.sh@10 -- # set +x
00:24:10.399 [2024-04-24 22:15:52.576388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.576423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.576629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.576888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.576915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.577111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.577256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.577284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.577454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.577627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.577655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.577832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.578298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.578667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.578868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.579075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.579235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.579262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.579410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.579576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.579604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.579785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.579998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.580025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 22:15:52 -- nvmf/common.sh@470 -- # nvmfpid=4039690
00:24:10.399 22:15:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:10.399 [2024-04-24 22:15:52.580193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- nvmf/common.sh@471 -- # waitforlisten 4039690
00:24:10.399 [2024-04-24 22:15:52.580380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.580417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 22:15:52 -- common/autotest_common.sh@817 -- # '[' -z 4039690 ']'
00:24:10.399 [2024-04-24 22:15:52.580612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:10.399 [2024-04-24 22:15:52.580792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:10.399 [2024-04-24 22:15:52.580821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.580964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 22:15:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:10.399 22:15:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:10.399 [2024-04-24 22:15:52.581137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.581166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 22:15:52 -- common/autotest_common.sh@10 -- # set +x
00:24:10.399 [2024-04-24 22:15:52.581351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.581542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.581571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.581732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.581984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.582012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.582222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.582433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.399 [2024-04-24 22:15:52.582460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.399 qpair failed and we were unable to recover it.
00:24:10.399 [2024-04-24 22:15:52.582663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.582845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.582873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.399 qpair failed and we were unable to recover it. 00:24:10.399 [2024-04-24 22:15:52.583036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.583223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.583250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.399 qpair failed and we were unable to recover it. 00:24:10.399 [2024-04-24 22:15:52.583460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.583640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.583668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.399 qpair failed and we were unable to recover it. 00:24:10.399 [2024-04-24 22:15:52.583811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.584004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.399 [2024-04-24 22:15:52.584031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.399 qpair failed and we were unable to recover it. 
00:24:10.399 [2024-04-24 22:15:52.584241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.584459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.584487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.584651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.584806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.584834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.585054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.585381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.585736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.585902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.586064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.586372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.586730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.586884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.587069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.587409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.587740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.587945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.588118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.588461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.588790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.588986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.589106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.589461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.589804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.589983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.590167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.590323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.590349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.590512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.590638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.590665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.590825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.591186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.591514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.591700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.591870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.592213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.592566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.592744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.592911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.593223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 00:24:10.400 [2024-04-24 22:15:52.593602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.400 [2024-04-24 22:15:52.593759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.400 qpair failed and we were unable to recover it. 
00:24:10.400 [2024-04-24 22:15:52.593948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.594266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.594651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.594839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.595002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.595374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.595728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.595921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.596079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.596425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.596781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.596968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.597126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.597283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.597310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.597499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.597667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.597694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.597841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.597984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.598011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.598164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.598322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.598349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.598503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.598688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.598715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.598899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.599239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.599569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.599759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.599940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.600247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.600603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.600815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.601011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.601326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.601664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.601848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.401 [2024-04-24 22:15:52.602000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.602130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.401 [2024-04-24 22:15:52.602157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.401 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.602311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.602463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.602491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.602613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.602773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.602801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.602956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.603299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.603643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.603839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.603997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.604401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.604729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.604915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.605050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.605428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.605773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.605958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.606097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.606445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.606767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.606929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.607054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.607403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.607741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.607925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.608078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.608421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.608785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.608962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.609124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.609435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.609794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.609949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.610107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.610448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.610782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.610998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.611179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.611337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.611364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.611525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.611655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.402 [2024-04-24 22:15:52.611682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.402 qpair failed and we were unable to recover it.
00:24:10.402 [2024-04-24 22:15:52.611842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.612214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.612582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.612768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.612946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.613291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.613624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.403 [2024-04-24 22:15:52.613836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.403 qpair failed and we were unable to recover it.
00:24:10.403 [2024-04-24 22:15:52.613997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.614364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.614677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.614858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.615044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.615382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.615706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.615917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.616076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.616409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.616819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.616973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.617128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.617501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.617778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.617989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.618144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.618301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.618328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.618491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.618653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.695 [2024-04-24 22:15:52.618680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.695 qpair failed and we were unable to recover it.
00:24:10.695 [2024-04-24 22:15:52.618857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.695 [2024-04-24 22:15:52.619013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.695 [2024-04-24 22:15:52.619040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.695 qpair failed and we were unable to recover it. 00:24:10.695 [2024-04-24 22:15:52.619225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.695 [2024-04-24 22:15:52.619364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.695 [2024-04-24 22:15:52.619391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.695 qpair failed and we were unable to recover it. 00:24:10.695 [2024-04-24 22:15:52.619555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.619711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.619737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.619925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 
00:24:10.696 [2024-04-24 22:15:52.620319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.620658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.620865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.621024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.621409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 
00:24:10.696 [2024-04-24 22:15:52.621724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.621935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.622122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.622425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.622739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.622927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 
00:24:10.696 [2024-04-24 22:15:52.623107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.623451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.623776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.623991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.624159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.624321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.624349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 
00:24:10.696 [2024-04-24 22:15:52.624519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.624686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.624713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.624874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.625025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.625052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.696 qpair failed and we were unable to recover it. 00:24:10.696 [2024-04-24 22:15:52.625217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.696 [2024-04-24 22:15:52.625415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.625445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.625603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.625741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.625768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 
00:24:10.697 [2024-04-24 22:15:52.625906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.626215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.626567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.626781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.626975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 
00:24:10.697 [2024-04-24 22:15:52.627322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.627606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.627794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.627953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.628299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 
00:24:10.697 [2024-04-24 22:15:52.628615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.628806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.628963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.629246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 00:24:10.697 [2024-04-24 22:15:52.629586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.697 [2024-04-24 22:15:52.629760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.697 qpair failed and we were unable to recover it. 
00:24:10.697 [2024-04-24 22:15:52.629945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.697 qpair failed and we were unable to recover it.
00:24:10.697 [2024-04-24 22:15:52.630308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.697 qpair failed and we were unable to recover it.
00:24:10.697 [2024-04-24 22:15:52.630671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.697 [2024-04-24 22:15:52.630881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.698 qpair failed and we were unable to recover it.
00:24:10.698 [2024-04-24 22:15:52.631012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631028] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization...
00:24:10.698 [2024-04-24 22:15:52.631105] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:10.698 [2024-04-24 22:15:52.631136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.698 qpair failed and we were unable to recover it.
00:24:10.698 [2024-04-24 22:15:52.631295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.698 qpair failed and we were unable to recover it.
00:24:10.698 [2024-04-24 22:15:52.631624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.631770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.698 qpair failed and we were unable to recover it.
00:24:10.698 [2024-04-24 22:15:52.631956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.632078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.698 [2024-04-24 22:15:52.632105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.698 qpair failed and we were unable to recover it.
00:24:10.698 [2024-04-24 22:15:52.632262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.632427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.632455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.632641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.632815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.632842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.632992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.633316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 
00:24:10.698 [2024-04-24 22:15:52.633698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.633853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.634040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.634344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.634696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.634882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 
00:24:10.698 [2024-04-24 22:15:52.635036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.635385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.635732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.635912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.636065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.636209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.636236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 
00:24:10.698 [2024-04-24 22:15:52.636424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.636548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.698 [2024-04-24 22:15:52.636576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.698 qpair failed and we were unable to recover it. 00:24:10.698 [2024-04-24 22:15:52.636763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.636894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.636922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.637090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.637454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.637803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.637987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.638115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.638236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.638263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.638419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.638599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.638626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.638816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.639159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.639535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.639746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.639911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.640265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.640619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.640785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.640978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.641326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.641651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.641847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.641969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.642366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.642750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.642932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.643056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.643408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.643753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.643944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.644104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.644428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.644776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.644983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.645172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.645359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.645386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.645527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.645712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.645739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.645900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 
00:24:10.699 [2024-04-24 22:15:52.646206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.646587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.646800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.646957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.647139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.699 [2024-04-24 22:15:52.647166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.699 qpair failed and we were unable to recover it. 00:24:10.699 [2024-04-24 22:15:52.647327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.647516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.647545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.647710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.647874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.647902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.648092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.648278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.648305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.648470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.648657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.648684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.648836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.649154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.649532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.649749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.649904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.650285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.650598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.650797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.650988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.651355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.651678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.651858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.652036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.652373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.652734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.652911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.653089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.653249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.653276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.653462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.653636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.653663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.653820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.653979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.654006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.654174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.654324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.654351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.654514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.654670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.654697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.654824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.655216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.655625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.655839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.655994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.656376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.656712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.656884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.657044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.657416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 
00:24:10.700 [2024-04-24 22:15:52.657784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.657964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.700 qpair failed and we were unable to recover it. 00:24:10.700 [2024-04-24 22:15:52.658119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.700 [2024-04-24 22:15:52.658255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.658282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.658438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.658597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.658628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.658803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.658965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.658992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 
00:24:10.701 [2024-04-24 22:15:52.659142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.659309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.659336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.659493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.659667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.659694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.659881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.660228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 
00:24:10.701 [2024-04-24 22:15:52.660525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.660798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.660982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.661009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.661168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.661293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.661320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.661511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.661671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.661698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 
00:24:10.701 [2024-04-24 22:15:52.661892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.662245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.662573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.662724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.662910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 
00:24:10.701 [2024-04-24 22:15:52.663270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.663624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.663779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.663967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 00:24:10.701 [2024-04-24 22:15:52.664256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.701 qpair failed and we were unable to recover it. 
00:24:10.701 [2024-04-24 22:15:52.664575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.701 [2024-04-24 22:15:52.664724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.664855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.665189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.665489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 
00:24:10.702 [2024-04-24 22:15:52.665824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.665990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.666125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.666495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 00:24:10.702 [2024-04-24 22:15:52.666839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.702 [2024-04-24 22:15:52.666989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.702 qpair failed and we were unable to recover it. 
00:24:10.703 EAL: No free 2048 kB hugepages reported on node 1
00:24:10.705 [2024-04-24 22:15:52.699887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.700254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.700662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.700884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.701046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.701245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.701272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 
00:24:10.705 [2024-04-24 22:15:52.701458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.701671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.701698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.701937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.702336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.702663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.702901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 
00:24:10.705 [2024-04-24 22:15:52.703144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.703342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.703369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.703593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.703762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.703790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.703954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.704292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 
00:24:10.705 [2024-04-24 22:15:52.704677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.704879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.705064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.705199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.705226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.705413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.705599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.705626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.705838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.705995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.706022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 
00:24:10.705 [2024-04-24 22:15:52.706238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.706428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.706457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.706589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.706748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.706776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.706969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.707185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.707213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.707457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.707627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.707654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 
00:24:10.705 [2024-04-24 22:15:52.707851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.708211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.708659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.708890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.705 qpair failed and we were unable to recover it. 00:24:10.705 [2024-04-24 22:15:52.709076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.705 [2024-04-24 22:15:52.709274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.709301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.709479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.709645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.709673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.709872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.710287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.710653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.710849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.711011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.711192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.711219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.711439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.711623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.711651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.711857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.712195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.712665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.712865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.713038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.713245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.713273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.713448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.713623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.713651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.713815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.713988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.714016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.714187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.714400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.714428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.714603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.714780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.714807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.715012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.715376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.715750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.715963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.716095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.716304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.716331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.716482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.716646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.716673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.716838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.717219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.717599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.717801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.717965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.718354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.718768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.718978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.719178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.719351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.719378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.719573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.719779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.719807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.720005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.720203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.720236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.720415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.720592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.720619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.720780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.720975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.721002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.721205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.721374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.721418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.721631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.721800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.721827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.722034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.722202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.722229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.722446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.722617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.722645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.722821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.722985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.723012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.723157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.723357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.723384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.723562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.723735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.723762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.723970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.724141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.724172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.724370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.724579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.724607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.724826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.724978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.725004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.725192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.725376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.725412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.725586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.725795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.725822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.725988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.726157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.726184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 00:24:10.706 [2024-04-24 22:15:52.726408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.726614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.726641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.706 qpair failed and we were unable to recover it. 
00:24:10.706 [2024-04-24 22:15:52.726828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.727000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.706 [2024-04-24 22:15:52.727027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.727183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.727348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.727374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.727582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.727766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.727792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.727984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.728401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.728784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.728995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.729153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.729311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.729339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.729487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.729687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.729714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.729883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.730331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.730681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.730886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.731048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.731356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.731767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.731999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.732170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.732455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.732483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.732678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.732815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.732842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.733009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.733174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.733201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.733338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.733538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.733566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.733777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.733976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.734003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.734216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.734428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.734456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.734612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.734762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.734789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.734999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.735188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.707 [2024-04-24 22:15:52.735201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.735229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.735401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.735588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.735615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.735829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.735991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.736025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.736197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.736408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.736436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.736581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.736774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.736801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.736998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.737126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.737153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.737361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.737601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.737629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.737798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.737993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.738020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.738204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.738357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.738384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.738568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.738716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.738743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.738912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.739327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.739704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.739917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.740084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.740290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.740318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.740493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.740713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.740740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.740946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.741286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.741671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.741895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.742068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.742239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.742266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.742472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.742698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.742725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.742984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.743165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.743192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.743388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.743595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.743622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.743821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.743984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.744016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 
00:24:10.707 [2024-04-24 22:15:52.744214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.744426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.744455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.707 [2024-04-24 22:15:52.744642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.744857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.707 [2024-04-24 22:15:52.744884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.707 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.745046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.745216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.745243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.745449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.745661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.745689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.745961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.746380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.746696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.746898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.747079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.747286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.747313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.747461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.747606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.747633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.747816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.747969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.748001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.748197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.748402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.748430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.748656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.748850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.748877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.749095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.749295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.749323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.749459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.749596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.749623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.749802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.749980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.750007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.750177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.750346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.750374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.750562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.750721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.750748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.750926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.751317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.751709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.751932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.752131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.752348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.752375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.752614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.752791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.752818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.753018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.753139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.753166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 00:24:10.708 [2024-04-24 22:15:52.753358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.753544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.708 [2024-04-24 22:15:52.753572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.708 qpair failed and we were unable to recover it. 
00:24:10.708 [2024-04-24 22:15:52.753723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.753929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.753955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.754210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.754378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.754415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.754621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.754788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.754815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.755012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.755182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.755210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.755382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.755574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.755601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.755770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.755973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.756001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.756160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.756344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.756371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.756554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.756729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.756756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.756943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.757316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.757737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.757935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.758129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.758338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.758365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.758538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.758719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.758746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.758946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.759314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.759782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.759946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.760148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.760350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.760377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.708 [2024-04-24 22:15:52.760595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.760767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.708 [2024-04-24 22:15:52.760794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.708 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.760970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.761337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.761762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.761969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.762118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.762288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.762315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.762482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.762682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.762709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.762908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.763307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.763702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.763926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.764137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.764300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.764327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.764503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.764676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.764703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.764872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.765306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.765700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.765920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.766131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.766267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.766294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.766458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.766626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.766653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.766820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.766986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.767013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.767148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.767284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.767311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.767490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.767651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.767678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.767833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.768316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.768722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.768933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.769121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.769256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.769283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.769458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.769652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.769679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.769874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.770346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.770743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.770904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.771063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.771239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.771266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.771468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.771635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.771662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.771871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.772201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.772619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.772781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.772983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.773294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.773681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.773915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.774129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.774292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.774319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.774516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.774679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.774706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.774834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.775206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.775662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.775853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.776024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.776370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.776752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.776910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.777160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.777456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.777484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.777688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.777835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.777862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.778040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.778209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.778237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.709 qpair failed and we were unable to recover it.
00:24:10.709 [2024-04-24 22:15:52.778360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.709 [2024-04-24 22:15:52.778489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.778517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.778686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.778894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.778921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.779130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.779342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.779368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.779551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.779766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.779794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.779963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.780339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.780673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.780899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.781096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.781280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.781308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.781513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.781683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.781710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.781905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.782344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.782687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.782873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.783073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.783233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.783260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.783432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.783620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.783648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.783838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.784266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.784680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.784865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.785041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.785228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.785255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.785441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.785609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.785636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.785844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.786201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.786647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.786830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.786999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.787170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.787197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.787424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.787637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.787665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.787841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.788302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.788641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.788827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.788983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.789336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.789731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.789920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.790123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.790319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.790346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.790488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.790664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.790691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.790854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.791298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.791687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.791924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.792079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.792248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.792275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.792487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.792664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.792691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.792877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.793238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.793621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.793815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.794000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.794352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.794747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.794958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.795081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.795267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.795302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.795514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.795714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.795741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.710 qpair failed and we were unable to recover it.
00:24:10.710 [2024-04-24 22:15:52.795952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.796148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.710 [2024-04-24 22:15:52.796175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.796374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.796554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.796582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.796790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.796951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.796979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.797180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.797351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.797379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.797571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.797728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.797755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.797929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.798255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.798591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.798793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.798973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.799364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.799804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.799990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.800157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.800324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.800351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.800553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.800698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.800726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.800901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.801248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.801645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.801847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.802032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.802214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.802241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.802428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.802589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.802616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.802832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.803237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.803561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.803772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.803969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.804363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.804735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.804930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.805137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.805349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.805388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.805589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.805794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.805831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.806034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.806256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.806294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.806491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.806647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.806685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.806903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.807295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.807619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.807832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.807989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.808385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.808740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.808919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.809107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.809270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.809297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.809471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.809630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.711 [2024-04-24 22:15:52.809657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.711 qpair failed and we were unable to recover it.
00:24:10.711 [2024-04-24 22:15:52.809822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.711 qpair failed and we were unable to recover it. 00:24:10.711 [2024-04-24 22:15:52.810227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.711 qpair failed and we were unable to recover it. 00:24:10.711 [2024-04-24 22:15:52.810579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.810737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.711 qpair failed and we were unable to recover it. 00:24:10.711 [2024-04-24 22:15:52.810906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.811035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.811062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.711 qpair failed and we were unable to recover it. 
00:24:10.711 [2024-04-24 22:15:52.811252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.811444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.711 [2024-04-24 22:15:52.811473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.711 qpair failed and we were unable to recover it. 00:24:10.711 [2024-04-24 22:15:52.811637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.811784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.811812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.811998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.812365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.812692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.812885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.813046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.813418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.813795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.813980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.814168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.814359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.814386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.814587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.814753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.814780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.814942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.815314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.815686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.815859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.816046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.816370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.816777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.816990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.817111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.817422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.817753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.817932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.818097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.818270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.818297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.818483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.818639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.818666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.818834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.819213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.819600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.819778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.819962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.820302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.820631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.820828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.820968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.821340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.821757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.821950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.822109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.822302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.822330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.822494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.822682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.822709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.822916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.823257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.823585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.823763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.823934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.824307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.824627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.824825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.824954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.825345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.825668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.825818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.825948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.826322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.826642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.826824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 
00:24:10.712 [2024-04-24 22:15:52.827009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.827160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.827187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.827376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.827555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.827583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.712 qpair failed and we were unable to recover it. 00:24:10.712 [2024-04-24 22:15:52.827737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.712 [2024-04-24 22:15:52.827893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.827920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.828108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.828264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.828291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.828488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.828675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.828702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.828869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.829236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.829641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.829907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.830139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.830455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.830803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.830993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.831149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.831340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.831367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.831521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.831680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.831707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.831867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.832270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.832673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.832868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.833005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.833412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.833711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.833925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.834108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.834408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.834798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.834982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.835114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.835238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.835266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 00:24:10.713 [2024-04-24 22:15:52.835506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.835663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.713 [2024-04-24 22:15:52.835690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.713 qpair failed and we were unable to recover it. 
00:24:10.713 [2024-04-24 22:15:52.835876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.836230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.836598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.836763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.836904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.837262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.837611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.837802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.837968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.838275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.838593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.838752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.838922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.839236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.839604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.839778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.839936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.840919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.840954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.841100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.841265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.841293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.841482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.841622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.841650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.841811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.841997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.842024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.842206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.842356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.842383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.842550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.842708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.842735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.842869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.843190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.843508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.843666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.843824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.844017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.844044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.844208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.844381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.844417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.713 qpair failed and we were unable to recover it.
00:24:10.713 [2024-04-24 22:15:52.844578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.713 [2024-04-24 22:15:52.844738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.844765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.844889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.845231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.845563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.845721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.845908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.846231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.846543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.846710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.846897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.847253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.847556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.847716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.847850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.848257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.848586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.848741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.848905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.849240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.849592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.849802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.849987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.850298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.850614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.850807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.850965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.851280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.851590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.851804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.852001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.852349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.852720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.852878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.853069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.853260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.853287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.853486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.853645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.853672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.853824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.853981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.854008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.854164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.854349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.854381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.854550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.854686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.854713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.854873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.855228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.855610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.855768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.855899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.856317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.856672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.856832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.856989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.857347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.857652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.857829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.714 qpair failed and we were unable to recover it.
00:24:10.714 [2024-04-24 22:15:52.857988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.714 [2024-04-24 22:15:52.858144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.858171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.858333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.858479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.858508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.858664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.858825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.858852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.859016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.859357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.859745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.859929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.860096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.860435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.860766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.860955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.861090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.861241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.715 [2024-04-24 22:15:52.861273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.715 qpair failed and we were unable to recover it.
00:24:10.715 [2024-04-24 22:15:52.861460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.861588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.861616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.861776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.861933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.861960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.862110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.862406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.862768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.862981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.863171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.863326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.863353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.863526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.863673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.863701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.863831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.863992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.864142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.864519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.864843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.864996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.865153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.865482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.865804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.865995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.866151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.866289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.866315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.866476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.866637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.866664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.866819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.866974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.867153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.867511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.867815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.867977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.868170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.868508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.868843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.868998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.869182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.869366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.869410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.869570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.869731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.869758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.869927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.870254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.870633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.870835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.870994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.871353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.871715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.871930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.872092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.715 [2024-04-24 22:15:52.872438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.872815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.872973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.873132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.873318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.873344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 00:24:10.715 [2024-04-24 22:15:52.873513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.873676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.715 [2024-04-24 22:15:52.873704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.715 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.873863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.874219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.874554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.874739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.874895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.875276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.875596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.875811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.875942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.876278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.876589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.876776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.876926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.877228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.877583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.877744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.877893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.878230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.878548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.878733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.878892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.879239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.879587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.879800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.879992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.880331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.880667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.880853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.881030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.881411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.881755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.881912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.882104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.882384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.882739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.882892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.883029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.883371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.883707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.883887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.884071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.884442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.884769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.884956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.885094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.885438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.885779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.885932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.886120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.886283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.886310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.886488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.886676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.886703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.886829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.886992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.887019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.887195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.887358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.887385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.887582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.887740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.887767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.887899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.888268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.888614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.888790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 
00:24:10.716 [2024-04-24 22:15:52.888980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.889143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.716 [2024-04-24 22:15:52.889170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.716 qpair failed and we were unable to recover it. 00:24:10.716 [2024-04-24 22:15:52.889354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.889519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.889546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.889703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.889842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.889869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.890005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.890349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.890727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.890882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.891038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.891349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.891731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.891915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.892063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.892436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.892782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.892998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.893127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.893288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.893315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.893507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.893645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.893672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.893806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.893992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.894019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.894148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.894334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.894362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.894527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.894713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.894740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.894901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.895234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.895588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.895801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.895930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.896299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.896618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.896813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.896975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.897349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.897697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.897853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.897871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.717 [2024-04-24 22:15:52.897923] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.717 [2024-04-24 22:15:52.897954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.717 [2024-04-24 22:15:52.897981] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.717 [2024-04-24 22:15:52.898005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.717 [2024-04-24 22:15:52.898008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.898163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.898122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:10.717 [2024-04-24 22:15:52.898192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.898185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:10.717 [2024-04-24 22:15:52.898249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:10.717 [2024-04-24 22:15:52.898345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.898238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:10.717 [2024-04-24 22:15:52.898690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.898725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.898899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.899237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.899643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.899803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.899969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.900318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.900615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.900794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.900946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.901319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.901638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.901820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 
00:24:10.717 [2024-04-24 22:15:52.901976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.902137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.902164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.902303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.902489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.717 [2024-04-24 22:15:52.902517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.717 qpair failed and we were unable to recover it. 00:24:10.717 [2024-04-24 22:15:52.902705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.902835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.902862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.903012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.903333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.903645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.903833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.903994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.904281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.904666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.904884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.905016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.905353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.905718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.905878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.906039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.906428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.906776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.906963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.907153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.907314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.907341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.907518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.907663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.907691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.907855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.908185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.908504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.908721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.908913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.909293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.909648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.909852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 00:24:10.718 [2024-04-24 22:15:52.910034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.910167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.718 [2024-04-24 22:15:52.910195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.718 qpair failed and we were unable to recover it. 
00:24:10.718 [2024-04-24 22:15:52.910391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.910536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.910564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.910702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.910834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.910862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.911018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.911332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.911675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.911878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.912071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.912455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.912794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.912988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.913123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.913280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.913307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.913441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.913630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.913657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.913842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.914189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.914558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.914710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.914886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.915243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.915561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.915734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.915894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.916176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.916545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.916763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.916921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.917261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.917612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.917819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.917963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.918123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.918150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.718 qpair failed and we were unable to recover it.
00:24:10.718 [2024-04-24 22:15:52.918300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.918464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.718 [2024-04-24 22:15:52.918493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.918646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.918802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.918829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.919017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.919330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.919703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.919910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.920092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.920430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.920773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.920986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.921144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.921497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.921816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.921999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.922191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.922347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.922375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.922537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.922713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.922740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.719 qpair failed and we were unable to recover it.
00:24:10.719 [2024-04-24 22:15:52.922910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.719 [2024-04-24 22:15:52.923094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.923121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.923280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.923471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.923499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.923648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.923820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.923847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.924011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.924352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.924684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.924830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.924986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.925136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.925163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.925351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.925483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.925511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.989 qpair failed and we were unable to recover it.
00:24:10.989 [2024-04-24 22:15:52.925696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.989 [2024-04-24 22:15:52.925884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.925912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.926080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.926455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.926777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.926967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.927092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.927461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.927812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.927990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.928130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.928316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.928343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.928509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.928674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.928701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.928849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.929175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.929545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.929731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.929890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.930238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.930636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.930786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.930950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.931323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.931671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.931861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.931991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.932337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.932690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.932880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.933066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.933388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.933741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.933957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.934119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.934304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.934332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.934524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.934685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.934713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.934844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.935006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.935033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.935222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.935410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.990 [2024-04-24 22:15:52.935439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.990 qpair failed and we were unable to recover it.
00:24:10.990 [2024-04-24 22:15:52.935632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.990 [2024-04-24 22:15:52.935823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.935851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.935998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.936344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.936708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.936898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.937095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.937279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.937306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.937475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.937634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.937667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.937830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.938237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.938631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.938816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.939006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.939384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.939756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.939970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.940103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.940268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.940295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.940480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.940664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.940691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.940879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.941248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.941596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.941752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.941936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.942304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.942727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.942916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.943070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.943408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.943777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.943960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.944148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.944310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.944337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.944523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.944696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.944723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.944887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.945301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.945680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.945867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 
00:24:10.991 [2024-04-24 22:15:52.946061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.946200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.991 [2024-04-24 22:15:52.946227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.991 qpair failed and we were unable to recover it. 00:24:10.991 [2024-04-24 22:15:52.946412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.946601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.946629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.946754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.946913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.946941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.947107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.947450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.947791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.947973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.948126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.948288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.948315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.948474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.948669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.948696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.948901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.949255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.949588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.949808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.950002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.950342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.950648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.950863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.951022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.951410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.951722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.951908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.952063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.952436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.952784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.952967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.953091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.953245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.953272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.953432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.953591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.953618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.953803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.953989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.954016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.954205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.954391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.954435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.954628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.954786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.954814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.954965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.955277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.955606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.955820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 
00:24:10.992 [2024-04-24 22:15:52.955989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.956175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.956202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.992 [2024-04-24 22:15:52.956359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.956553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.992 [2024-04-24 22:15:52.956581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.992 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.956779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.956933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.956960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.957148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.957281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.957309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.957501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.957637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.957664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.957830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.957989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.958016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.958201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.958354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.958381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.958562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.958724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.958752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.958933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.959295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.959678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.959901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.960068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.960254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.960281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.960445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.960605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.960633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.960800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.960998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.961025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.961181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.961334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.961361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.961534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.961690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.961716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.961875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.962245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.962652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.962863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.963061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.963416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.963779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.963961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.964146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.964293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.964320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.964506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.964629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.964656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.964814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.964999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.965026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.965212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.965369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.965402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.965589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.965772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.965799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.965956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 
00:24:10.993 [2024-04-24 22:15:52.966329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.966704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.966868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.993 qpair failed and we were unable to recover it. 00:24:10.993 [2024-04-24 22:15:52.967028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.993 [2024-04-24 22:15:52.967219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.967246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.967409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.967561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.967589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.967715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.967910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.967937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.968128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.968316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.968343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.968506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.968669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.968697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.968885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.969201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.969565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.969767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.969915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.970297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.970631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.970818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.971015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.971337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.971714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.971900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.972085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.972247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.972274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.972433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.972589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.972616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.972815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.972974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.973002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.973161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.973326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.973353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.973487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.973681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.973708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.973856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.974261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.974634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.974825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 
00:24:10.994 [2024-04-24 22:15:52.974984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.975125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.975152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.994 qpair failed and we were unable to recover it. 00:24:10.994 [2024-04-24 22:15:52.975338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.975524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.994 [2024-04-24 22:15:52.975553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.975685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.975805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.975832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.976020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.976308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.976648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.976800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.976986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.977333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.977645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.977828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.978015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.978303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.978721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.978934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.979093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.979278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.979306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.979459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.979619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.979646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.979834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.980241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.980619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.980822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.980983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.981404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.981730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.981942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.982134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.982289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.982317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.982452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.982639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.982666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.982867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.983242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.983633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.983811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.983999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.984371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.984719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.984908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 
00:24:10.995 [2024-04-24 22:15:52.985104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.985298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.985325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.985519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.985707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.985734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.995 qpair failed and we were unable to recover it. 00:24:10.995 [2024-04-24 22:15:52.985930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.995 [2024-04-24 22:15:52.986087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.986114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 00:24:10.996 [2024-04-24 22:15:52.986275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.986462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.986491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 
00:24:10.996 [2024-04-24 22:15:52.986686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.986843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.986871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 00:24:10.996 [2024-04-24 22:15:52.987031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 00:24:10.996 [2024-04-24 22:15:52.987368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 00:24:10.996 [2024-04-24 22:15:52.987720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.996 [2024-04-24 22:15:52.987933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420 00:24:10.996 qpair failed and we were unable to recover it. 
00:24:10.996 [2024-04-24 22:15:52.988071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.988255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.988282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.988418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.988570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.988598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.988786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.988970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.989002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.989162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.989348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.989376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.989514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.989676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.989703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.989886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.990285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.990661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.990844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.991004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.991351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.991750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.991961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.992148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.992301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.992328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.992478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.992642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.992674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.992864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.993247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.993566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.993778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.993962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.994278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.994655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.994845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.994993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.995330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.995737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.995917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.996074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.996267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.996299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.996 qpair failed and we were unable to recover it.
00:24:10.996 [2024-04-24 22:15:52.996496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.996 [2024-04-24 22:15:52.996658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.996685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.996874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.997255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.997597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.997809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.997945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.998262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.998616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.998803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.998958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.999298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:52.999706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:52.999895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.000023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.000405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.000747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.000959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.001146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.001329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.001357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.001555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.001714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.001742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.001866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.002239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.002567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.002720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.002917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.003212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.003580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.003773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.003926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.004295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.004694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.004879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.005049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.005355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.005747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.005928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.006058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.006248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.006275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.006439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.006625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.997 [2024-04-24 22:15:53.006652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.997 qpair failed and we were unable to recover it.
00:24:10.997 [2024-04-24 22:15:53.006803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.006996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.007023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.007194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.007332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.007359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.007498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.007653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.007680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.007873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.008256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.008664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.008853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.009008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.009345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.009729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.009892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.010079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.010263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.010290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.010446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.010609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.010636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.010830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.011228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.011624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.011834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.012019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.012207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.012235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.012426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.012587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.012614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.012803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.012989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.013016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.013180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.013339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.013366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.013534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.013671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.013698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.013859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.014231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.014651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.014865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.015023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.015206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.015233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.015385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.015554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.015582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.998 qpair failed and we were unable to recover it.
00:24:10.998 [2024-04-24 22:15:53.015741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.998 [2024-04-24 22:15:53.015927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.015954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.016116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.016302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.016329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.016511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.016661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.016689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.016876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.017246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.017600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.017821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.018011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.018386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.018747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.018934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.019120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.019460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.019818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.019980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.020136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.020318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.020346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.020543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.020695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.020723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.020912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.021288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.021667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.021855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.022025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.022409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.022742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.022928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.023092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.023276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.023303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.023466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.023625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.023652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.023787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.023979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.024006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.024194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.024378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.024413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.024606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.024788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.024816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.024985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.025145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.025172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.025356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.025522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.025549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d68000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.025725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b04690 is same with the state(5) to be set
00:24:10.999 [2024-04-24 22:15:53.025945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.026126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.026157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:10.999 [2024-04-24 22:15:53.026355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.026524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.999 [2024-04-24 22:15:53.026552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:10.999 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.026716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.026875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.026902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.027091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.027222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.027248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.027410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.027574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.027602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.027761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.028230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.028583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.028798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.028987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.029330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.029686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.029901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.030056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.030251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.030278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.030443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.030596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.030623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.030813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.031222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.031609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.031798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.031936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.032312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.032706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.032892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.033045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.033229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.033256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.033405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.033660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.033687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.033872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.034311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.034664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.034846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.035032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.035434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.035750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.035933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.036092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.036280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.036307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.036525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.036687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.036714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.036897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.037053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.037080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.000 [2024-04-24 22:15:53.037275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.037434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.000 [2024-04-24 22:15:53.037462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.000 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.037618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.037776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.037802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.037962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.038273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.038622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.038834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.038992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.039366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.039785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.039995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.040181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.040337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.001 [2024-04-24 22:15:53.040364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.001 qpair failed and we were unable to recover it.
00:24:11.001 [2024-04-24 22:15:53.040533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.040720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.040747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.040941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.041326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.041645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.041835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 
00:24:11.001 [2024-04-24 22:15:53.041983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.042319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.042696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.042916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.043170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.043357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.043384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 
00:24:11.001 [2024-04-24 22:15:53.043554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.043705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.043732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.043861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.044274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.044647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.044835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 
00:24:11.001 [2024-04-24 22:15:53.044989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.045330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.045715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.045929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.046092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.046280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.046307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 
00:24:11.001 [2024-04-24 22:15:53.046470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.046656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.046683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.046874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.047189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 00:24:11.001 [2024-04-24 22:15:53.047579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.047790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.001 qpair failed and we were unable to recover it. 
00:24:11.001 [2024-04-24 22:15:53.047944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.001 [2024-04-24 22:15:53.048098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.048125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.048292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.048474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.048502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.048666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.048854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.048882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.049048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.049232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.049260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 22:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:11.002 [2024-04-24 22:15:53.049421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 22:15:53 -- common/autotest_common.sh@850 -- # return 0 00:24:11.002 [2024-04-24 22:15:53.049607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 22:15:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:11.002 [2024-04-24 22:15:53.049635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 22:15:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.002 [2024-04-24 22:15:53.049850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:24:11.002 [2024-04-24 22:15:53.050010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.050038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.050195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.050381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.050414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.050607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.050764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.050792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.050951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.051329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.051689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.051857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.051992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.052339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.052728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.052940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.053070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.053388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.053740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.053950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.054140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.054325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.054352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.054542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.054686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.054714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.054900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.055280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.055659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.055882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.056061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.056403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.056779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.056992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.057186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.057382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.057417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.057604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.057793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.057820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 
00:24:11.002 [2024-04-24 22:15:53.057978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.058163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.058190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.002 qpair failed and we were unable to recover it. 00:24:11.002 [2024-04-24 22:15:53.058323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.002 [2024-04-24 22:15:53.058466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.058494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.058678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.058840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.058867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.059055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.059408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.059764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.059955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.060122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.060279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.060307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.060493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.060651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.060679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.060875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.061209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.061559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.061721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.061896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.062259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.062601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.062766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.062965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.063316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.063640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.063821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.063984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.064324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.064731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.064918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.065108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.065295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.065323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.065506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.065666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.065694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.065879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.066039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.066066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 00:24:11.003 [2024-04-24 22:15:53.066261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.066385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.003 [2024-04-24 22:15:53.066421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.003 qpair failed and we were unable to recover it. 
00:24:11.003 [2024-04-24 22:15:53.066581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.003 [2024-04-24 22:15:53.066766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.003 [2024-04-24 22:15:53.066794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.003 qpair failed and we were unable to recover it.
00:24:11.003 [2024-04-24 22:15:53.066985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.003 [2024-04-24 22:15:53.067171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.003 [2024-04-24 22:15:53.067198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.003 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.067357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.067504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.067532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.067673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.067822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.067849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.068012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.068358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.068681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.068870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.069039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.069175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.069202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 22:15:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:11.004 [2024-04-24 22:15:53.069386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 22:15:53 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:11.004 [2024-04-24 22:15:53.069532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.069563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.004 [2024-04-24 22:15:53.069753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.004 [2024-04-24 22:15:53.069921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.069950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.070083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.070410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.070757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.070942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.071097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.071289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.071316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.071450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.071639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.071666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.071860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.072198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.072511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.072661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.072811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.073189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.073545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.073710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.073873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.074246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.074554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.074766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.074927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.075327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.075648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.075831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.076016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.076330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.004 [2024-04-24 22:15:53.076693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.004 [2024-04-24 22:15:53.076907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.004 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.077027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.077381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.077727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.077913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.078039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.078443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.078785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.078945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.079127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.079284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.079311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.079476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.079638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.079665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.079853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.080234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.080594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.080784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.080945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.081347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.081672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.081860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.082021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.082432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.082761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.082980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.083131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.083345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.083372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.083526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.083691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.083718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.083855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.084238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.084620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.084804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.084962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.085334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.085687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.085901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.086067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.086447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.086795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.086976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.005 qpair failed and we were unable to recover it.
00:24:11.005 [2024-04-24 22:15:53.087127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.087289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.005 [2024-04-24 22:15:53.087316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.087495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.087625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.087652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.087813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.088181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.088531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.088741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.088869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.089196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.089555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.089748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.089881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.090292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.090587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.090775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.090928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.091067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.091095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.091284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.091451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.091478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.091633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.091796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.091822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.006 qpair failed and we were unable to recover it. 00:24:11.006 [2024-04-24 22:15:53.091982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.006 qpair failed and we were unable to recover it. 00:24:11.006 [2024-04-24 22:15:53.092348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.006 qpair failed and we were unable to recover it. 00:24:11.006 [2024-04-24 22:15:53.092760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.006 [2024-04-24 22:15:53.092977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420 00:24:11.006 qpair failed and we were unable to recover it. 
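The repeated `errno = 111` in these `connect()` failures is Linux `ECONNREFUSED`: the host's TCP SYN to 10.0.0.2:4420 is answered with RST because no listener is up yet on the target side. A minimal sketch (assuming a Linux errno layout) that maps the number to its symbolic name:

```python
import errno
import os

# On Linux, errno 111 is ECONNREFUSED -- the error behind every
# "connect() failed, errno = 111" record in this log.
name = errno.errorcode[111]   # symbolic name for errno 111
text = os.strerror(111)       # human-readable message
print(name, "-", text)        # ECONNREFUSED - Connection refused (on Linux)
```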
00:24:11.006 [2024-04-24 22:15:53.093138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.093294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.093322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.093517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.093705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.093733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 Malloc0
00:24:11.006 [2024-04-24 22:15:53.093865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.094027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.094054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.094202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.006 [2024-04-24 22:15:53.094342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.094370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 22:15:53 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:11.006 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.006 [2024-04-24 22:15:53.094544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.006 [2024-04-24 22:15:53.094703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.094732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.094911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.095236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.095572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.095765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.095929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.096279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.096622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.096805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.006 [2024-04-24 22:15:53.096973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.097110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.006 [2024-04-24 22:15:53.097137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.006 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.097281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.097473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.097502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.097585] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:11.007 [2024-04-24 22:15:53.097670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.097800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.097832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.097997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.098348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.098680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.098895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.099043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.099403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.099783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.099967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.100154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.100475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.100820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.100987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.101146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.101288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.101319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.101490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.101656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.101683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.101815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.101976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.102189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.102487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.102793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.102962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.103110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.103260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.103286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.103486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.103650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.103676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.103840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.103999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.104026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.104211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.104370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.104406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.104598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.104792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.104820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.105018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.105198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.105225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.105380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.105577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.105604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 [2024-04-24 22:15:53.105741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.007 [2024-04-24 22:15:53.105870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 [2024-04-24 22:15:53.105898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.007 qpair failed and we were unable to recover it.
00:24:11.007 22:15:53 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:11.007 [2024-04-24 22:15:53.106059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.007 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.008 [2024-04-24 22:15:53.106222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.008 [2024-04-24 22:15:53.106250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.106389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.106536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.106565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.106717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.106904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.106931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.107091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.107426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.107750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.107917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.108101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.108430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.108729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.108921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.109108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.109427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.109781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.109973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.110134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.110312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.110340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.110507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.110644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.110671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.110838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.110996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.111023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.111216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.111407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.111435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.111582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.111741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.111768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.111908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.112235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.112564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.112722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.112904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.113234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.113564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.113781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.008 [2024-04-24 22:15:53.113934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 22:15:53 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:11.008 [2024-04-24 22:15:53.114093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.008 [2024-04-24 22:15:53.114120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.008 [2024-04-24 22:15:53.114285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.114434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.114462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.008 [2024-04-24 22:15:53.114621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.114815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.008 [2024-04-24 22:15:53.114842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.008 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.114983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.115316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.115625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.115813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.115975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.116381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.116745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.116932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.117064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.117444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.117743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.117905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.118073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.118375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.118729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.118883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.119057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.119476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.119806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.119976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.120135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.120478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.120813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.120970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.121132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.121295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.121322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.121472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.121598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.121630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.121794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.009 [2024-04-24 22:15:53.121947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.121975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 22:15:53 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:11.009 [2024-04-24 22:15:53.122134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.009 [2024-04-24 22:15:53.122292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.009 [2024-04-24 22:15:53.122320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.122461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.122600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.122627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.122817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.122945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.122972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.123130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.123464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.123807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.123967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.009 [2024-04-24 22:15:53.124134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.124294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.009 [2024-04-24 22:15:53.124320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.009 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.124508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.124641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.124668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.124804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.124988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.125152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.125456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125588] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:24:11.010 [2024-04-24 22:15:53.125599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d78000b90 with addr=10.0.0.2, port=4420
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.125809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.010 [2024-04-24 22:15:53.125896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:11.010 [2024-04-24 22:15:53.129206] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:24:11.010 [2024-04-24 22:15:53.129274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9d78000b90 (107): Transport endpoint is not connected
00:24:11.010 [2024-04-24 22:15:53.129349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.010 22:15:53 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:11.010 22:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:11.010 22:15:53 -- common/autotest_common.sh@10 -- # set +x
00:24:11.010 22:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:11.010 22:15:53 -- host/target_disconnect.sh@58 -- # wait 4039190
00:24:11.010 [2024-04-24 22:15:53.138323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.138504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.138536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.138553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.138567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.138601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.148244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.148376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.148425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.148443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.148463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.148497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.158260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.158415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.158445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.158461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.158475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.158507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.168224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.168352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.168381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.168406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.168421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.168453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.178248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.178379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.178415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.178432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.178446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.178478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.188298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.188472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.188502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.188520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.188533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.188565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.198310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.198483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.198512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.198529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.198542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.198574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.208344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.208490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.208519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.208536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.208549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.208582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.218352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.218504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.218532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.218549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.218562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.218596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.010 [2024-04-24 22:15:53.228377] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.010 [2024-04-24 22:15:53.228507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.010 [2024-04-24 22:15:53.228536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.010 [2024-04-24 22:15:53.228552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.010 [2024-04-24 22:15:53.228565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.010 [2024-04-24 22:15:53.228597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.010 qpair failed and we were unable to recover it.
00:24:11.270 [2024-04-24 22:15:53.238434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.270 [2024-04-24 22:15:53.238598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.270 [2024-04-24 22:15:53.238627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.270 [2024-04-24 22:15:53.238649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.270 [2024-04-24 22:15:53.238663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.270 [2024-04-24 22:15:53.238696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.270 qpair failed and we were unable to recover it.
00:24:11.270 [2024-04-24 22:15:53.248452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.270 [2024-04-24 22:15:53.248605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.270 [2024-04-24 22:15:53.248635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.270 [2024-04-24 22:15:53.248651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.270 [2024-04-24 22:15:53.248664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.270 [2024-04-24 22:15:53.248696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.270 qpair failed and we were unable to recover it.
00:24:11.270 [2024-04-24 22:15:53.258496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.270 [2024-04-24 22:15:53.258628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.270 [2024-04-24 22:15:53.258657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.270 [2024-04-24 22:15:53.258673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.270 [2024-04-24 22:15:53.258686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.270 [2024-04-24 22:15:53.258718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.270 qpair failed and we were unable to recover it.
00:24:11.270 [2024-04-24 22:15:53.268487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.270 [2024-04-24 22:15:53.268615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.270 [2024-04-24 22:15:53.268643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.270 [2024-04-24 22:15:53.268660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.270 [2024-04-24 22:15:53.268673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.270 [2024-04-24 22:15:53.268705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.270 qpair failed and we were unable to recover it.
00:24:11.270 [2024-04-24 22:15:53.278534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.271 [2024-04-24 22:15:53.278689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.271 [2024-04-24 22:15:53.278717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.271 [2024-04-24 22:15:53.278733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.271 [2024-04-24 22:15:53.278746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.271 [2024-04-24 22:15:53.278778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.271 qpair failed and we were unable to recover it.
00:24:11.271 [2024-04-24 22:15:53.288567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.271 [2024-04-24 22:15:53.288696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.271 [2024-04-24 22:15:53.288725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.271 [2024-04-24 22:15:53.288741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.271 [2024-04-24 22:15:53.288755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.271 [2024-04-24 22:15:53.288786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.271 qpair failed and we were unable to recover it.
00:24:11.271 [2024-04-24 22:15:53.298597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.271 [2024-04-24 22:15:53.298733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.271 [2024-04-24 22:15:53.298762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.271 [2024-04-24 22:15:53.298778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.271 [2024-04-24 22:15:53.298791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.271 [2024-04-24 22:15:53.298824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.271 qpair failed and we were unable to recover it.
00:24:11.271 [2024-04-24 22:15:53.308611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.271 [2024-04-24 22:15:53.308746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.271 [2024-04-24 22:15:53.308774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.271 [2024-04-24 22:15:53.308790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.271 [2024-04-24 22:15:53.308804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.271 [2024-04-24 22:15:53.308836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.271 qpair failed and we were unable to recover it.
00:24:11.271 [2024-04-24 22:15:53.318628] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.271 [2024-04-24 22:15:53.318756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.271 [2024-04-24 22:15:53.318784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.271 [2024-04-24 22:15:53.318800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.271 [2024-04-24 22:15:53.318813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.271 [2024-04-24 22:15:53.318847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.271 qpair failed and we were unable to recover it.
00:24:11.271 [2024-04-24 22:15:53.328688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.328824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.328859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.328876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.328889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.328921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.338700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.338826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.338851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.338867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.338880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.338912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.348744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.348864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.348891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.348907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.348919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.348951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.358758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.358895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.358923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.358938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.358951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.358983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.368793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.368924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.368952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.368968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.368981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.369019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.378824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.378958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.378987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.379003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.379016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.379048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.388848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.388974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.389002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.389018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.389031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.389062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.398877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.399008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.399036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.399052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.271 [2024-04-24 22:15:53.399065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.271 [2024-04-24 22:15:53.399096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.271 qpair failed and we were unable to recover it. 
00:24:11.271 [2024-04-24 22:15:53.408907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.271 [2024-04-24 22:15:53.409036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.271 [2024-04-24 22:15:53.409064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.271 [2024-04-24 22:15:53.409080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.409093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.409125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.418941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.419083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.419116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.419133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.419146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.419177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.428979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.429109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.429137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.429153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.429166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.429198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.438979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.439135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.439162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.439178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.439192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.439224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.449031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.449210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.449238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.449254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.449267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.449299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.459124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.459254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.459283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.459298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.459311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.459349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.469128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.469278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.469306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.469322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.469336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.469368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.479100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.479237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.479265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.479281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.479295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.479326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.489198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.489325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.489353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.489369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.489382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.489423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.499175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.499348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.499377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.499400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.499416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.499448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.509184] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.509321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.509349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.509365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.509378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.509418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.272 [2024-04-24 22:15:53.519244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.272 [2024-04-24 22:15:53.519409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.272 [2024-04-24 22:15:53.519438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.272 [2024-04-24 22:15:53.519454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.272 [2024-04-24 22:15:53.519467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.272 [2024-04-24 22:15:53.519499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.272 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.529265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.529404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.529431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.529447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.529460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.529492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.539251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.539374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.539409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.539427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.539440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.539472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.549288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.549440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.549469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.549485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.549505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.549539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.559324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.559462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.559492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.559508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.559521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.559553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.569343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.569481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.569510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.569526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.569539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.569571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.579370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.579504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.579533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.579549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.579562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.579594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.589392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.589523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.589552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.589569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.589582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.589615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.599459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.563 [2024-04-24 22:15:53.599593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.563 [2024-04-24 22:15:53.599621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.563 [2024-04-24 22:15:53.599637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.563 [2024-04-24 22:15:53.599650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.563 [2024-04-24 22:15:53.599682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.563 qpair failed and we were unable to recover it. 
00:24:11.563 [2024-04-24 22:15:53.609488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.563 [2024-04-24 22:15:53.609617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.563 [2024-04-24 22:15:53.609645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.563 [2024-04-24 22:15:53.609661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.563 [2024-04-24 22:15:53.609675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.563 [2024-04-24 22:15:53.609707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.563 qpair failed and we were unable to recover it.
00:24:11.563 [2024-04-24 22:15:53.619489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.563 [2024-04-24 22:15:53.619611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.563 [2024-04-24 22:15:53.619639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.563 [2024-04-24 22:15:53.619654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.563 [2024-04-24 22:15:53.619667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.563 [2024-04-24 22:15:53.619699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.563 qpair failed and we were unable to recover it.
00:24:11.563 [2024-04-24 22:15:53.629501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.563 [2024-04-24 22:15:53.629626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.563 [2024-04-24 22:15:53.629655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.563 [2024-04-24 22:15:53.629670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.563 [2024-04-24 22:15:53.629683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.563 [2024-04-24 22:15:53.629715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.563 qpair failed and we were unable to recover it.
00:24:11.563 [2024-04-24 22:15:53.639578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.563 [2024-04-24 22:15:53.639712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.639738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.639759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.639774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.639806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.649577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.649718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.649747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.649763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.649776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.649808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.659636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.659765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.659793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.659809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.659822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.659853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.669636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.669763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.669790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.669807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.669820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.669852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.679692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.679856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.679883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.679899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.679912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.679943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.689689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.689814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.689842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.689858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.689870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.689902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.699750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.699927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.699955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.699971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.699984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.700016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.709734] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.709870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.709898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.709914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.709927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.709958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.719810] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.719941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.719968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.719983] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.719996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.720028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.729816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.729945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.729973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.729997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.730011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.730043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.739892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.740060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.740088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.740104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.740122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.740154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.749931] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.750087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.750114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.750130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.750143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.750175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.759935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.760074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.760102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.760118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.760131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.564 [2024-04-24 22:15:53.760162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.564 qpair failed and we were unable to recover it.
00:24:11.564 [2024-04-24 22:15:53.769949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.564 [2024-04-24 22:15:53.770077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.564 [2024-04-24 22:15:53.770105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.564 [2024-04-24 22:15:53.770121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.564 [2024-04-24 22:15:53.770134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.565 [2024-04-24 22:15:53.770165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.565 qpair failed and we were unable to recover it.
00:24:11.565 [2024-04-24 22:15:53.779985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.565 [2024-04-24 22:15:53.780163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.565 [2024-04-24 22:15:53.780190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.565 [2024-04-24 22:15:53.780206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.565 [2024-04-24 22:15:53.780219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.565 [2024-04-24 22:15:53.780251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.565 qpair failed and we were unable to recover it.
00:24:11.565 [2024-04-24 22:15:53.789977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.565 [2024-04-24 22:15:53.790122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.565 [2024-04-24 22:15:53.790150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.565 [2024-04-24 22:15:53.790166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.565 [2024-04-24 22:15:53.790179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.565 [2024-04-24 22:15:53.790210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.565 qpair failed and we were unable to recover it.
00:24:11.565 [2024-04-24 22:15:53.800060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.565 [2024-04-24 22:15:53.800195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.565 [2024-04-24 22:15:53.800223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.565 [2024-04-24 22:15:53.800240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.565 [2024-04-24 22:15:53.800253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.565 [2024-04-24 22:15:53.800284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.565 qpair failed and we were unable to recover it.
00:24:11.565 [2024-04-24 22:15:53.810088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.565 [2024-04-24 22:15:53.810237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.565 [2024-04-24 22:15:53.810266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.565 [2024-04-24 22:15:53.810281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.565 [2024-04-24 22:15:53.810294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.565 [2024-04-24 22:15:53.810327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.565 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.820121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.820251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.820285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.820301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.820314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.820346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.830119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.830244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.830273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.830289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.830302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.830333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.840168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.840299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.840328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.840343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.840357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.840388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.850175] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.850300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.850328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.850344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.850358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.850410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.860220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.860347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.860376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.860391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.860414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.860453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.870327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.870487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.870515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.870532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.870545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.870577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.880309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.880466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.880493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.880509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.824 [2024-04-24 22:15:53.880522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.824 [2024-04-24 22:15:53.880554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.824 qpair failed and we were unable to recover it.
00:24:11.824 [2024-04-24 22:15:53.890292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.824 [2024-04-24 22:15:53.890428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.824 [2024-04-24 22:15:53.890456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.824 [2024-04-24 22:15:53.890472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.890484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.890516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.900311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.900448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.900476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.900492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.900506] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.900537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.910569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.910724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.910761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.910777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.910791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.910822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.920481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.920616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.920643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.920659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.920672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.920704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.930486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.930619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.930647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.930663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.930685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.930717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.940560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.940685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.940713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.940729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.940742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.940774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.950459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.950597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.950625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.950641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.950660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.950692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.960510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:11.825 [2024-04-24 22:15:53.960695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:11.825 [2024-04-24 22:15:53.960723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:11.825 [2024-04-24 22:15:53.960740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:11.825 [2024-04-24 22:15:53.960753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:11.825 [2024-04-24 22:15:53.960785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:11.825 qpair failed and we were unable to recover it.
00:24:11.825 [2024-04-24 22:15:53.970550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:53.970690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:53.970719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:53.970735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:53.970748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:53.970780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.825 qpair failed and we were unable to recover it. 
00:24:11.825 [2024-04-24 22:15:53.980562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:53.980696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:53.980724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:53.980740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:53.980753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:53.980784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.825 qpair failed and we were unable to recover it. 
00:24:11.825 [2024-04-24 22:15:53.990602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:53.990737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:53.990764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:53.990780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:53.990793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:53.990825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.825 qpair failed and we were unable to recover it. 
00:24:11.825 [2024-04-24 22:15:54.000627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:54.000764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:54.000792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:54.000807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:54.000820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:54.000853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.825 qpair failed and we were unable to recover it. 
00:24:11.825 [2024-04-24 22:15:54.010676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:54.010826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:54.010864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:54.010880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:54.010893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:54.010926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.825 qpair failed and we were unable to recover it. 
00:24:11.825 [2024-04-24 22:15:54.020667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.825 [2024-04-24 22:15:54.020803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.825 [2024-04-24 22:15:54.020831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.825 [2024-04-24 22:15:54.020847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.825 [2024-04-24 22:15:54.020860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.825 [2024-04-24 22:15:54.020892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:11.826 [2024-04-24 22:15:54.030679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.826 [2024-04-24 22:15:54.030810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.826 [2024-04-24 22:15:54.030838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.826 [2024-04-24 22:15:54.030854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.826 [2024-04-24 22:15:54.030867] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.826 [2024-04-24 22:15:54.030899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:11.826 [2024-04-24 22:15:54.040803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.826 [2024-04-24 22:15:54.040935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.826 [2024-04-24 22:15:54.040962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.826 [2024-04-24 22:15:54.040985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.826 [2024-04-24 22:15:54.041000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.826 [2024-04-24 22:15:54.041032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:11.826 [2024-04-24 22:15:54.050790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.826 [2024-04-24 22:15:54.050943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.826 [2024-04-24 22:15:54.050971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.826 [2024-04-24 22:15:54.050987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.826 [2024-04-24 22:15:54.051000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.826 [2024-04-24 22:15:54.051031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:11.826 [2024-04-24 22:15:54.060764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.826 [2024-04-24 22:15:54.060888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.826 [2024-04-24 22:15:54.060916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.826 [2024-04-24 22:15:54.060932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.826 [2024-04-24 22:15:54.060945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.826 [2024-04-24 22:15:54.060977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:11.826 [2024-04-24 22:15:54.070786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.826 [2024-04-24 22:15:54.070916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.826 [2024-04-24 22:15:54.070944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.826 [2024-04-24 22:15:54.070959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.826 [2024-04-24 22:15:54.070973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:11.826 [2024-04-24 22:15:54.071004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:11.826 qpair failed and we were unable to recover it. 
00:24:12.085 [2024-04-24 22:15:54.080846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.080975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.081003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.081018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.081031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.081063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.090934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.091063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.091090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.091106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.091126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.091158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.100930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.101107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.101135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.101152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.101165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.101196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.110919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.111044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.111073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.111089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.111102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.111133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.120937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.121072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.121100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.121116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.121129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.121161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.131017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.131170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.131198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.131220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.131234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.131267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.141012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.141147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.141175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.141191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.141204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.141236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.151039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.151165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.151193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.151208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.151221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.151253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.161085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.161229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.161258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.161273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.161286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.161317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.171121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.171270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.171297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.171313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.171326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.171357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.181118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.181252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.181281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.181296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.181309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.181342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.191140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.191264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.191292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.191308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.191321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.191352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.201183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.201341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.201369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.086 [2024-04-24 22:15:54.201385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.086 [2024-04-24 22:15:54.201407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.086 [2024-04-24 22:15:54.201441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.086 qpair failed and we were unable to recover it. 
00:24:12.086 [2024-04-24 22:15:54.211237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.086 [2024-04-24 22:15:54.211369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.086 [2024-04-24 22:15:54.211414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.087 [2024-04-24 22:15:54.211430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.087 [2024-04-24 22:15:54.211443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.087 [2024-04-24 22:15:54.211475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.087 qpair failed and we were unable to recover it. 
00:24:12.087 [2024-04-24 22:15:54.221226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.087 [2024-04-24 22:15:54.221385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.087 [2024-04-24 22:15:54.221426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.087 [2024-04-24 22:15:54.221443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.087 [2024-04-24 22:15:54.221456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.087 [2024-04-24 22:15:54.221489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.087 qpair failed and we were unable to recover it. 
00:24:12.087 [2024-04-24 22:15:54.231278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.087 [2024-04-24 22:15:54.231448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.087 [2024-04-24 22:15:54.231476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.087 [2024-04-24 22:15:54.231492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.087 [2024-04-24 22:15:54.231505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.087 [2024-04-24 22:15:54.231538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.087 qpair failed and we were unable to recover it. 
00:24:12.087 [2024-04-24 22:15:54.241327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.087 [2024-04-24 22:15:54.241469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.087 [2024-04-24 22:15:54.241498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.087 [2024-04-24 22:15:54.241514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.087 [2024-04-24 22:15:54.241527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.087 [2024-04-24 22:15:54.241559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.087 qpair failed and we were unable to recover it. 
00:24:12.087 [2024-04-24 22:15:54.251443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.251621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.251652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.251668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.251680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.251712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.261345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.261480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.261509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.261524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.261538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.261576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.271365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.271499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.271528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.271544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.271558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.271590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.281422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.281561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.281588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.281605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.281618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.281651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.291442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.291602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.291630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.291646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.291659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.291693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.301496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.301645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.301673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.301696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.301709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.301740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.311516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.311648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.311683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.311699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.311713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.311744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.321559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.321693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.321721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.321737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.321751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.321783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.087 [2024-04-24 22:15:54.331572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.087 [2024-04-24 22:15:54.331736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.087 [2024-04-24 22:15:54.331764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.087 [2024-04-24 22:15:54.331780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.087 [2024-04-24 22:15:54.331793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.087 [2024-04-24 22:15:54.331824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.087 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.341615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.341763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.341792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.341808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.341821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.341853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.351629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.351833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.351872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.351888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.351908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.351940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.361660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.361790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.361817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.361834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.361847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.361879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.371700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.371831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.371859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.371875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.371887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.371919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.381704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.381829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.381857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.381873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.381887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.381919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.391770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.391907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.348 [2024-04-24 22:15:54.391935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.348 [2024-04-24 22:15:54.391951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.348 [2024-04-24 22:15:54.391964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.348 [2024-04-24 22:15:54.391996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.348 qpair failed and we were unable to recover it.
00:24:12.348 [2024-04-24 22:15:54.401817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.348 [2024-04-24 22:15:54.402006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.402033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.402049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.402062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.402093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.411808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.411937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.411965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.411980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.411994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.412025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.421842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.421967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.421996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.422012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.422025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.422057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.431877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.432003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.432032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.432048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.432061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.432092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.441914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.442051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.442078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.442096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.442115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.442148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.451956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.452085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.452114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.452130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.452142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.452174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.461961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.462087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.462115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.462131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.462149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.462181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.471975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.472101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.472129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.472145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.472158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.472189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.482054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.482189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.482217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.482233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.482247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.482278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.492083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.492216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.492254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.492269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.492282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.492315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.502121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.502257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.502285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.502301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.502314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.502346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.512114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.512239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.512266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.512282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.512295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.512327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.522163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.522304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.349 [2024-04-24 22:15:54.522331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.349 [2024-04-24 22:15:54.522347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.349 [2024-04-24 22:15:54.522360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.349 [2024-04-24 22:15:54.522391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.349 qpair failed and we were unable to recover it.
00:24:12.349 [2024-04-24 22:15:54.532181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.349 [2024-04-24 22:15:54.532355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.532383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.532415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.532431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.532464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.542220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.542369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.542403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.542421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.542434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.542466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.552228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.552356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.552384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.552409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.552423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.552455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.562288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.562441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.562469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.562485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.562498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.562529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.572303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.572442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.572471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.572487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.572500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.572532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.582338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.582472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.582501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.582517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.582530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.582562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.350 [2024-04-24 22:15:54.592420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.350 [2024-04-24 22:15:54.592549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.350 [2024-04-24 22:15:54.592589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.350 [2024-04-24 22:15:54.592605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.350 [2024-04-24 22:15:54.592618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.350 [2024-04-24 22:15:54.592650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.350 qpair failed and we were unable to recover it.
00:24:12.611 [2024-04-24 22:15:54.602431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:12.611 [2024-04-24 22:15:54.602580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:12.611 [2024-04-24 22:15:54.602616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:12.611 [2024-04-24 22:15:54.602640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:12.611 [2024-04-24 22:15:54.602655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:12.611 [2024-04-24 22:15:54.602694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:12.611 qpair failed and we were unable to recover it.
00:24:12.611 [2024-04-24 22:15:54.612414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.612546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.612574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.612590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.612603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.612636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.622451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.622579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.622613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.622629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.622643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.622674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.632462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.632587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.632614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.632631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.632644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.632676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.642504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.642641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.642667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.642682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.642695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.642727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.652556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.652683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.652712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.652728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.652741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.652784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.662542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.662710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.662738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.662754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.662767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.662808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.672566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.672693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.672721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.672737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.672750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.672782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.682642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.682784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.682811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.682828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.682841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.682873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.692654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.692783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.692811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.692827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.692840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.692871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.702660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.702789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.702816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.702832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.702845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.702877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.712706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.712874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.712907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.712924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.712937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.611 [2024-04-24 22:15:54.712968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.611 qpair failed and we were unable to recover it. 
00:24:12.611 [2024-04-24 22:15:54.722757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.611 [2024-04-24 22:15:54.722898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.611 [2024-04-24 22:15:54.722927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.611 [2024-04-24 22:15:54.722942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.611 [2024-04-24 22:15:54.722955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.722986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.732749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.732933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.732961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.732977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.732990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.733022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.742808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.742958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.742986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.743002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.743015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.743047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.752808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.752934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.752961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.752977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.752996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.753029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.762842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.762973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.763001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.763017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.763030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.763063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.772890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.773050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.773077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.773093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.773106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.773138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.782945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.783077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.783105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.783121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.783134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.783166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.792907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.793029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.793057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.793073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.793086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.793117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.802942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.803131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.803159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.803175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.803188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.803219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.812999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.813128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.813156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.813172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.813185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.813216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.823074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.823194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.823222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.823238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.823251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.823282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.833034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.833180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.833208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.833224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.833238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.833269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.843058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.843200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.843228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.843244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.843264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.843297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.612 [2024-04-24 22:15:54.853079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.612 [2024-04-24 22:15:54.853205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.612 [2024-04-24 22:15:54.853233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.612 [2024-04-24 22:15:54.853249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.612 [2024-04-24 22:15:54.853262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.612 [2024-04-24 22:15:54.853294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.612 qpair failed and we were unable to recover it. 
00:24:12.613 [2024-04-24 22:15:54.863103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.613 [2024-04-24 22:15:54.863225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.613 [2024-04-24 22:15:54.863253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.613 [2024-04-24 22:15:54.863270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.613 [2024-04-24 22:15:54.863283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.613 [2024-04-24 22:15:54.863315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.613 qpair failed and we were unable to recover it. 
00:24:12.872 [2024-04-24 22:15:54.873122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.872 [2024-04-24 22:15:54.873257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.873285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.873301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.873314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.873346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.883172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.883307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.883336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.883352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.883365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.883404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.893217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.893349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.893377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.893400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.893415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.893448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.903257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.903384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.903421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.903437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.903451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.903483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.913241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.913364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.913392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.913417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.913431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.913463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.923304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.923439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.923468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.923483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.923496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.923528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.933318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.933453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.933482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.933504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.933518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.933550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.943368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.943522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.943550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.943566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.943579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.943611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.953457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.953592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.953620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.953636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.953649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.953680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.963426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.963568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.963596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.963612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.963625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.963657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.973457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.973587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.973615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.973630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.973643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.973676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.983458] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.983594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.983623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.983639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.983651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.983683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:54.993480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:54.993649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:54.993677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:54.993692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:54.993705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.873 [2024-04-24 22:15:54.993737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.873 qpair failed and we were unable to recover it. 
00:24:12.873 [2024-04-24 22:15:55.003521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.873 [2024-04-24 22:15:55.003654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.873 [2024-04-24 22:15:55.003682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.873 [2024-04-24 22:15:55.003698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.873 [2024-04-24 22:15:55.003711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.003743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.013585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.013757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.013784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.013800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.013813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.013845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.023617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.023746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.023779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.023795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.023808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.023840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.033649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.033771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.033799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.033815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.033828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.033860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.043643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.043791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.043818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.043834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.043847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.043879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.053653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.053786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.053815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.053831] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.053844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.053875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.063688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.063823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.063851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.063867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.063880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.063918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.073775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.073937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.073964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.073980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.073993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.074026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.083779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.083913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.083940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.083956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.083969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.084001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.093815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.093990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.094018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.094034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.094047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.094078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.103835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.103962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.103990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.104006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.104019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.104051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.113844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.113969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.114003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.114020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.114033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.114064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:12.874 [2024-04-24 22:15:55.123914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.874 [2024-04-24 22:15:55.124075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.874 [2024-04-24 22:15:55.124103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.874 [2024-04-24 22:15:55.124119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.874 [2024-04-24 22:15:55.124132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:12.874 [2024-04-24 22:15:55.124164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.874 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.133917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.134048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.134076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.134092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.134106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.134138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.143913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.144044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.144072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.144087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.144100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.144131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.153952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.154086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.154114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.154129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.154143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.154180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.164024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.164158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.164186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.164202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.164215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.164246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.174050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.174179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.174207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.174222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.174236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.174268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.134 [2024-04-24 22:15:55.184045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.134 [2024-04-24 22:15:55.184177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.134 [2024-04-24 22:15:55.184205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.134 [2024-04-24 22:15:55.184221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.134 [2024-04-24 22:15:55.184234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.134 [2024-04-24 22:15:55.184266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.134 qpair failed and we were unable to recover it. 
00:24:13.135 [2024-04-24 22:15:55.194068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.135 [2024-04-24 22:15:55.194191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.135 [2024-04-24 22:15:55.194219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.135 [2024-04-24 22:15:55.194235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.135 [2024-04-24 22:15:55.194248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.135 [2024-04-24 22:15:55.194280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.135 qpair failed and we were unable to recover it. 
00:24:13.135 [2024-04-24 22:15:55.204127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.135 [2024-04-24 22:15:55.204270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.135 [2024-04-24 22:15:55.204299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.135 [2024-04-24 22:15:55.204314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.135 [2024-04-24 22:15:55.204327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.135 [2024-04-24 22:15:55.204360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.135 qpair failed and we were unable to recover it. 
00:24:13.135 [2024-04-24 22:15:55.214160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.214326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.214353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.214369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.214382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.214421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.224262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.224388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.224422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.224438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.224451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.224482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.234190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.234321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.234349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.234365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.234378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.234417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.244237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.244368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.244401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.244419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.244438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.244472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.254300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.254456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.254485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.254501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.254514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.254546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.264299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.264437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.264465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.264481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.264494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.264526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.274317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.274443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.274471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.274487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.274500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.274532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.284344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.284486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.284514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.284529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.284543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.284575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.294383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.294555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.294583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.294598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.294611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.294643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.304411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.304584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.304611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.304626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.304640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.304672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.314454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.135 [2024-04-24 22:15:55.314607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.135 [2024-04-24 22:15:55.314635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.135 [2024-04-24 22:15:55.314650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.135 [2024-04-24 22:15:55.314663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.135 [2024-04-24 22:15:55.314695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.135 qpair failed and we were unable to recover it.
00:24:13.135 [2024-04-24 22:15:55.324488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.324619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.324647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.324662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.324676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.324708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.334504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.334665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.334695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.334718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.334733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.334768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.344533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.344663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.344692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.344707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.344721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.344753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.354564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.354688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.354716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.354732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.354746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.354777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.364600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.364731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.364758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.364774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.364787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.364819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.374623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.374791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.374818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.374835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.374848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.374880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.136 [2024-04-24 22:15:55.384656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.136 [2024-04-24 22:15:55.384782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.136 [2024-04-24 22:15:55.384810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.136 [2024-04-24 22:15:55.384826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.136 [2024-04-24 22:15:55.384839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.136 [2024-04-24 22:15:55.384870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.136 qpair failed and we were unable to recover it.
00:24:13.395 [2024-04-24 22:15:55.394670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.395 [2024-04-24 22:15:55.394796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.395 [2024-04-24 22:15:55.394824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.395 [2024-04-24 22:15:55.394840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.395 [2024-04-24 22:15:55.394852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.395 [2024-04-24 22:15:55.394884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.395 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.404803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.404939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.404966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.404982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.404996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.405027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.414750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.414881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.414909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.414925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.414938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.414970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.424794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.424963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.424991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.425013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.425028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.425060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.434798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.434931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.434959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.434975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.434988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.435020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.444822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.444949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.444977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.444993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.445006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.445038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.454842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.455022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.455051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.455068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.455082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.455114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.464882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.465008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.465036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.465053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.465066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.465097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.474913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.475040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.475068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.475083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.475096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.475128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.484959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.485089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.485117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.485133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.485145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.485177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.494983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.495159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.495187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.495203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.495217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.495248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.504995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.505126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.396 [2024-04-24 22:15:55.505154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.396 [2024-04-24 22:15:55.505170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.396 [2024-04-24 22:15:55.505184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.396 [2024-04-24 22:15:55.505216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.396 qpair failed and we were unable to recover it.
00:24:13.396 [2024-04-24 22:15:55.515029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.396 [2024-04-24 22:15:55.515152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.515186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.515203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.515216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.515249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.525095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.397 [2024-04-24 22:15:55.525222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.525251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.525267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.525281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.525313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.535085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.397 [2024-04-24 22:15:55.535212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.535240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.535256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.535270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.535302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.545122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.397 [2024-04-24 22:15:55.545295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.545322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.545338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.545351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.545383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.555126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.397 [2024-04-24 22:15:55.555255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.555283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.555298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.555311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.555349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.565197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.397 [2024-04-24 22:15:55.565344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.397 [2024-04-24 22:15:55.565372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.397 [2024-04-24 22:15:55.565388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.397 [2024-04-24 22:15:55.565411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.397 [2024-04-24 22:15:55.565444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.397 qpair failed and we were unable to recover it.
00:24:13.397 [2024-04-24 22:15:55.575194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.575318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.575346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.575362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.575376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.575414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.585211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.585335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.585363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.585379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.585392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.585434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.595250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.595373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.595409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.595427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.595451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.595487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.605344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.605497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.605532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.605549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.605562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.605595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.615324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.615494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.615524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.615540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.615553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.615585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.625355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.625487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.625516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.625532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.625545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.625577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.635368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.397 [2024-04-24 22:15:55.635527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.397 [2024-04-24 22:15:55.635556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.397 [2024-04-24 22:15:55.635572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.397 [2024-04-24 22:15:55.635585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.397 [2024-04-24 22:15:55.635617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.397 qpair failed and we were unable to recover it. 
00:24:13.397 [2024-04-24 22:15:55.645484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.398 [2024-04-24 22:15:55.645627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.398 [2024-04-24 22:15:55.645653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.398 [2024-04-24 22:15:55.645668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.398 [2024-04-24 22:15:55.645688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.398 [2024-04-24 22:15:55.645720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.398 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.655467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.657 [2024-04-24 22:15:55.655614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.657 [2024-04-24 22:15:55.655642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.657 [2024-04-24 22:15:55.655659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.657 [2024-04-24 22:15:55.655672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.657 [2024-04-24 22:15:55.655704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.657 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.665495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.657 [2024-04-24 22:15:55.665649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.657 [2024-04-24 22:15:55.665677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.657 [2024-04-24 22:15:55.665693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.657 [2024-04-24 22:15:55.665706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.657 [2024-04-24 22:15:55.665738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.657 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.675528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.657 [2024-04-24 22:15:55.675675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.657 [2024-04-24 22:15:55.675702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.657 [2024-04-24 22:15:55.675718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.657 [2024-04-24 22:15:55.675732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.657 [2024-04-24 22:15:55.675764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.657 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.685550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.657 [2024-04-24 22:15:55.685679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.657 [2024-04-24 22:15:55.685707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.657 [2024-04-24 22:15:55.685723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.657 [2024-04-24 22:15:55.685735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.657 [2024-04-24 22:15:55.685767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.657 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.695588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.657 [2024-04-24 22:15:55.695729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.657 [2024-04-24 22:15:55.695757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.657 [2024-04-24 22:15:55.695773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.657 [2024-04-24 22:15:55.695786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.657 [2024-04-24 22:15:55.695829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.657 qpair failed and we were unable to recover it. 
00:24:13.657 [2024-04-24 22:15:55.705621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.705755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.705783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.705798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.705811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.705843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.715614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.715748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.715776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.715791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.715805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.715836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.725687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.725820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.725848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.725864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.725877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.725908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.735697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.735826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.735853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.735875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.735889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.735921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.745693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.745826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.745853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.745869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.745882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.745914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.755771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.755898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.755925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.755941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.755954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.755986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.765839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.765979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.766006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.766022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.766035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.766067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.775818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.775949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.775977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.775993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.776006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.776037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.785828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.785957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.785994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.786010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.786023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.786055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.795871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.796009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.796037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.796053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.796066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.796098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.805900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.806034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.806065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.806081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.806094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.806126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.815987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.816116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.816144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.816160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.816173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.816214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.825966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.826104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.826138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.658 [2024-04-24 22:15:55.826160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.658 [2024-04-24 22:15:55.826174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.658 [2024-04-24 22:15:55.826206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.658 qpair failed and we were unable to recover it. 
00:24:13.658 [2024-04-24 22:15:55.836037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.658 [2024-04-24 22:15:55.836162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.658 [2024-04-24 22:15:55.836189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.659 [2024-04-24 22:15:55.836205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.659 [2024-04-24 22:15:55.836219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.659 [2024-04-24 22:15:55.836250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.659 qpair failed and we were unable to recover it. 
00:24:13.659 [2024-04-24 22:15:55.846057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.659 [2024-04-24 22:15:55.846188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.659 [2024-04-24 22:15:55.846217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.659 [2024-04-24 22:15:55.846233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.659 [2024-04-24 22:15:55.846246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:13.659 [2024-04-24 22:15:55.846278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.659 qpair failed and we were unable to recover it. 
00:24:13.659 [2024-04-24 22:15:55.856047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.856177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.856206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.856222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.856235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.856267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.659 [2024-04-24 22:15:55.866096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.866244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.866272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.866288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.866301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.866333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.659 [2024-04-24 22:15:55.876125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.876264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.876292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.876307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.876320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.876353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.659 [2024-04-24 22:15:55.886152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.886284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.886312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.886328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.886341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.886372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.659 [2024-04-24 22:15:55.896178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.896303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.896342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.896358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.896371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.896411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.659 [2024-04-24 22:15:55.906166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.659 [2024-04-24 22:15:55.906290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.659 [2024-04-24 22:15:55.906318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.659 [2024-04-24 22:15:55.906333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.659 [2024-04-24 22:15:55.906346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.659 [2024-04-24 22:15:55.906379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.659 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.916233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.916357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.916392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.916421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.916434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.916467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.926269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.926411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.926439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.926455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.926468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.926500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.936270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.936410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.936439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.936455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.936468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.936500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.946363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.946494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.946522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.946538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.946551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.946583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.956347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.956516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.956545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.956561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.956574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.956612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.966392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.966558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.966586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.966602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.966615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.966647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.976460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.976592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.976619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.976635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.976648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.976680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.986466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.986594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.986621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.986637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.986651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.919 [2024-04-24 22:15:55.986683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.919 qpair failed and we were unable to recover it.
00:24:13.919 [2024-04-24 22:15:55.996464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.919 [2024-04-24 22:15:55.996600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.919 [2024-04-24 22:15:55.996628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.919 [2024-04-24 22:15:55.996644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.919 [2024-04-24 22:15:55.996657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:55.996689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.006514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.006647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.006680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.006697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.006710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.006743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.016529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.016663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.016701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.016717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.016730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.016763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.026570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.026704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.026732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.026748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.026761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.026793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.036587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.036709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.036736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.036752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.036765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.036797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.046632] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.046764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.046792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.046808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.046830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.046866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.056674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.056828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.056856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.056872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.056885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.056918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.066647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.066817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.066844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.066860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.066873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.066905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.076691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.076816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.076844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.076859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.076872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.076904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.086737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.086870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.086898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.086914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.086927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.086958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.096766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.096897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.096923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.096939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.096952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.096984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.106768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.106904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.106933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.106949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.106962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.106994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.116842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.117006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.117035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.117050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.117063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.117095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.126846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.126999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.920 [2024-04-24 22:15:56.127026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.920 [2024-04-24 22:15:56.127042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.920 [2024-04-24 22:15:56.127055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.920 [2024-04-24 22:15:56.127087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.920 qpair failed and we were unable to recover it.
00:24:13.920 [2024-04-24 22:15:56.136922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.920 [2024-04-24 22:15:56.137054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.921 [2024-04-24 22:15:56.137082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.921 [2024-04-24 22:15:56.137097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.921 [2024-04-24 22:15:56.137117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.921 [2024-04-24 22:15:56.137149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.921 qpair failed and we were unable to recover it.
00:24:13.921 [2024-04-24 22:15:56.146964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.921 [2024-04-24 22:15:56.147093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.921 [2024-04-24 22:15:56.147121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.921 [2024-04-24 22:15:56.147137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.921 [2024-04-24 22:15:56.147149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.921 [2024-04-24 22:15:56.147181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.921 qpair failed and we were unable to recover it.
00:24:13.921 [2024-04-24 22:15:56.156945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.921 [2024-04-24 22:15:56.157105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.921 [2024-04-24 22:15:56.157134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.921 [2024-04-24 22:15:56.157150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.921 [2024-04-24 22:15:56.157163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.921 [2024-04-24 22:15:56.157194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.921 qpair failed and we were unable to recover it.
00:24:13.921 [2024-04-24 22:15:56.166990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:13.921 [2024-04-24 22:15:56.167124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:13.921 [2024-04-24 22:15:56.167151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:13.921 [2024-04-24 22:15:56.167167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:13.921 [2024-04-24 22:15:56.167181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:13.921 [2024-04-24 22:15:56.167213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.921 qpair failed and we were unable to recover it.
00:24:14.180 [2024-04-24 22:15:56.177059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.180 [2024-04-24 22:15:56.177233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.180 [2024-04-24 22:15:56.177267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.180 [2024-04-24 22:15:56.177283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.180 [2024-04-24 22:15:56.177296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.180 [2024-04-24 22:15:56.177328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.180 qpair failed and we were unable to recover it.
00:24:14.180 [2024-04-24 22:15:56.187062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.180 [2024-04-24 22:15:56.187191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.180 [2024-04-24 22:15:56.187220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.180 [2024-04-24 22:15:56.187236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.180 [2024-04-24 22:15:56.187248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.180 [2024-04-24 22:15:56.187280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.180 qpair failed and we were unable to recover it.
00:24:14.180 [2024-04-24 22:15:56.197041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.180 [2024-04-24 22:15:56.197168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.180 [2024-04-24 22:15:56.197196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.180 [2024-04-24 22:15:56.197212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.180 [2024-04-24 22:15:56.197225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.180 [2024-04-24 22:15:56.197257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.180 qpair failed and we were unable to recover it.
00:24:14.180 [2024-04-24 22:15:56.207094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.180 [2024-04-24 22:15:56.207234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.180 [2024-04-24 22:15:56.207262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.180 [2024-04-24 22:15:56.207282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.180 [2024-04-24 22:15:56.207295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.180 [2024-04-24 22:15:56.207326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.180 qpair failed and we were unable to recover it.
00:24:14.180 [2024-04-24 22:15:56.217127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.180 [2024-04-24 22:15:56.217256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.180 [2024-04-24 22:15:56.217284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.180 [2024-04-24 22:15:56.217300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.180 [2024-04-24 22:15:56.217313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.180 [2024-04-24 22:15:56.217345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.180 qpair failed and we were unable to recover it. 
00:24:14.180 [2024-04-24 22:15:56.227165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.180 [2024-04-24 22:15:56.227321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.180 [2024-04-24 22:15:56.227349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.180 [2024-04-24 22:15:56.227371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.180 [2024-04-24 22:15:56.227385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.180 [2024-04-24 22:15:56.227426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.180 qpair failed and we were unable to recover it. 
00:24:14.180 [2024-04-24 22:15:56.237173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.180 [2024-04-24 22:15:56.237303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.180 [2024-04-24 22:15:56.237331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.180 [2024-04-24 22:15:56.237347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.180 [2024-04-24 22:15:56.237361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.180 [2024-04-24 22:15:56.237402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.180 qpair failed and we were unable to recover it. 
00:24:14.181 [2024-04-24 22:15:56.247258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.247387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.247430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.247446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.247459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.247491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.257227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.257359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.257387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.257412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.257426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.257458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.267278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.267419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.267447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.267462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.267475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.267507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.277276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.277403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.277431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.277447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.277460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.277491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.287409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.287550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.287578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.287593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.287606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.287638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.297345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.297485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.297513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.297529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.297542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.297574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.307449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.307610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.307638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.307653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.307666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.307699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.317407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.317535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.317569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.317586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.317599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.317631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.327455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.327614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.327641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.327656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.327669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.327701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.337496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.337627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.337658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.337673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.337686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.337718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.347517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.347653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.347681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.347697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.347710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.347742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.357622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.357754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.357782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.357798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.357811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.357849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.367573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.367713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.367740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.181 [2024-04-24 22:15:56.367756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.181 [2024-04-24 22:15:56.367770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.181 [2024-04-24 22:15:56.367802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.181 qpair failed and we were unable to recover it.
00:24:14.181 [2024-04-24 22:15:56.377657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.181 [2024-04-24 22:15:56.377848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.181 [2024-04-24 22:15:56.377875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.377891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.377904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.377935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.182 [2024-04-24 22:15:56.387600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.182 [2024-04-24 22:15:56.387732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.182 [2024-04-24 22:15:56.387759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.387775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.387788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.387820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.182 [2024-04-24 22:15:56.397653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.182 [2024-04-24 22:15:56.397820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.182 [2024-04-24 22:15:56.397849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.397865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.397878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.397910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.182 [2024-04-24 22:15:56.407792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.182 [2024-04-24 22:15:56.407924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.182 [2024-04-24 22:15:56.407957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.407974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.407987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.408019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.182 [2024-04-24 22:15:56.417688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.182 [2024-04-24 22:15:56.417820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.182 [2024-04-24 22:15:56.417849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.417865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.417878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.417910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.182 [2024-04-24 22:15:56.427742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.182 [2024-04-24 22:15:56.427878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.182 [2024-04-24 22:15:56.427906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.182 [2024-04-24 22:15:56.427922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.182 [2024-04-24 22:15:56.427935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.182 [2024-04-24 22:15:56.427967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.182 qpair failed and we were unable to recover it.
00:24:14.441 [2024-04-24 22:15:56.437767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.441 [2024-04-24 22:15:56.437917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.441 [2024-04-24 22:15:56.437944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.441 [2024-04-24 22:15:56.437960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.441 [2024-04-24 22:15:56.437973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.441 [2024-04-24 22:15:56.438005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.441 qpair failed and we were unable to recover it.
00:24:14.441 [2024-04-24 22:15:56.447811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.441 [2024-04-24 22:15:56.447967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.441 [2024-04-24 22:15:56.447995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.441 [2024-04-24 22:15:56.448011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.441 [2024-04-24 22:15:56.448031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.441 [2024-04-24 22:15:56.448063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.441 qpair failed and we were unable to recover it.
00:24:14.441 [2024-04-24 22:15:56.457832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.441 [2024-04-24 22:15:56.457965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.441 [2024-04-24 22:15:56.457994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.441 [2024-04-24 22:15:56.458010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.441 [2024-04-24 22:15:56.458023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.441 [2024-04-24 22:15:56.458055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.441 qpair failed and we were unable to recover it.
00:24:14.441 [2024-04-24 22:15:56.467871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.441 [2024-04-24 22:15:56.468003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.441 [2024-04-24 22:15:56.468030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.441 [2024-04-24 22:15:56.468046] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.441 [2024-04-24 22:15:56.468059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.468101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.477845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.477976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.478004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.478020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.478033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.478064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.487928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.488058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.488085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.488101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.488114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.488146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.497896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.498034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.498062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.498078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.498092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.498124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.507969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.508101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.508139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.508155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.508168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.508200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.517997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.518124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.518152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.518168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.518181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.518219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.528059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:14.442 [2024-04-24 22:15:56.528326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:14.442 [2024-04-24 22:15:56.528355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:14.442 [2024-04-24 22:15:56.528371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:14.442 [2024-04-24 22:15:56.528385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:14.442 [2024-04-24 22:15:56.528426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:14.442 qpair failed and we were unable to recover it.
00:24:14.442 [2024-04-24 22:15:56.538051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.538262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.538291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.538306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.538326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.538360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.548109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.548241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.548269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.548285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.548297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.548340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.558084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.558204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.558233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.558249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.558262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.558293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.568166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.568301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.568328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.568344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.568357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.568389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.578164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.578351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.578380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.578403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.578418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.578459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.588160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.588288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.588316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.588332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.588345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.588377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.598216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.442 [2024-04-24 22:15:56.598344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.442 [2024-04-24 22:15:56.598373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.442 [2024-04-24 22:15:56.598389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.442 [2024-04-24 22:15:56.598415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.442 [2024-04-24 22:15:56.598448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.442 qpair failed and we were unable to recover it. 
00:24:14.442 [2024-04-24 22:15:56.608262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.608402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.608430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.608445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.608459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.608491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.618259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.618385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.618421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.618438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.618451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.618483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.628338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.628471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.628499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.628521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.628535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.628567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.638348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.638497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.638526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.638541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.638554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.638585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.648388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.648538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.648565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.648580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.648593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.648625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.658389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.658539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.658567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.658583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.658596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.658628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.668417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.668542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.668570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.668586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.668600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.668632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.678450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.678575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.678603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.678619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.678632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.678664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.443 [2024-04-24 22:15:56.688522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.443 [2024-04-24 22:15:56.688656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.443 [2024-04-24 22:15:56.688684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.443 [2024-04-24 22:15:56.688700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.443 [2024-04-24 22:15:56.688714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.443 [2024-04-24 22:15:56.688746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.443 qpair failed and we were unable to recover it. 
00:24:14.703 [2024-04-24 22:15:56.698495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.703 [2024-04-24 22:15:56.698629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.703 [2024-04-24 22:15:56.698657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.703 [2024-04-24 22:15:56.698673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.703 [2024-04-24 22:15:56.698686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.703 [2024-04-24 22:15:56.698717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.703 qpair failed and we were unable to recover it. 
00:24:14.703 [2024-04-24 22:15:56.708510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.703 [2024-04-24 22:15:56.708635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.703 [2024-04-24 22:15:56.708663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.703 [2024-04-24 22:15:56.708679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.703 [2024-04-24 22:15:56.708692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.703 [2024-04-24 22:15:56.708723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.703 qpair failed and we were unable to recover it. 
00:24:14.703 [2024-04-24 22:15:56.718578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.703 [2024-04-24 22:15:56.718733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.718766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.718783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.718797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.718829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.728590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.728723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.728751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.728766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.728780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.728811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.738624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.738752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.738779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.738795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.738809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.738841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.748625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.748751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.748779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.748795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.748808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.748839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.758656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.758787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.758815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.758830] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.758843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.758881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.768694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.768830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.768858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.768874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.768888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.768920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.778730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.778854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.778882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.778898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.778911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.778943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.788720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.788848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.788876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.788892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.788905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.788936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.798754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.798885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.798913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.798929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.798942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.798973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.808894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.809024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.809059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.809076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.809089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.809121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.704 [2024-04-24 22:15:56.818822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.704 [2024-04-24 22:15:56.818989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.704 [2024-04-24 22:15:56.819017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.704 [2024-04-24 22:15:56.819033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.704 [2024-04-24 22:15:56.819047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.704 [2024-04-24 22:15:56.819079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.704 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.828842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.828974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.829002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.829017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.829031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.829063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.838866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.839013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.839040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.839056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.839069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.839100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.848960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.849110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.849138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.849154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.849168] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.849206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.858951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.859127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.859155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.859171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.859184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.859216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.868963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.869096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.869124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.869140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.869153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.869185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.879033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.879200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.879227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.879243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.879256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.879288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.889060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.889229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.889257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.889274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.889287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.889319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.899103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.899235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.899262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.899278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.899291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.899323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.909075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.909201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.909230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.909246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.909259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.909290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.919099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.919276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.919304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.705 [2024-04-24 22:15:56.919320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.705 [2024-04-24 22:15:56.919333] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.705 [2024-04-24 22:15:56.919364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.705 qpair failed and we were unable to recover it. 
00:24:14.705 [2024-04-24 22:15:56.929152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.705 [2024-04-24 22:15:56.929314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.705 [2024-04-24 22:15:56.929341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.706 [2024-04-24 22:15:56.929357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.706 [2024-04-24 22:15:56.929370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.706 [2024-04-24 22:15:56.929409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.706 qpair failed and we were unable to recover it. 
00:24:14.706 [2024-04-24 22:15:56.939174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.706 [2024-04-24 22:15:56.939307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.706 [2024-04-24 22:15:56.939334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.706 [2024-04-24 22:15:56.939350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.706 [2024-04-24 22:15:56.939369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.706 [2024-04-24 22:15:56.939410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.706 qpair failed and we were unable to recover it. 
00:24:14.706 [2024-04-24 22:15:56.949228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.706 [2024-04-24 22:15:56.949382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.706 [2024-04-24 22:15:56.949419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.706 [2024-04-24 22:15:56.949436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.706 [2024-04-24 22:15:56.949449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.706 [2024-04-24 22:15:56.949481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.706 qpair failed and we were unable to recover it. 
00:24:14.966 [2024-04-24 22:15:56.959239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.966 [2024-04-24 22:15:56.959413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.966 [2024-04-24 22:15:56.959442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.966 [2024-04-24 22:15:56.959458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.966 [2024-04-24 22:15:56.959471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.966 [2024-04-24 22:15:56.959503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.966 qpair failed and we were unable to recover it. 
00:24:14.966 [2024-04-24 22:15:56.969272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.966 [2024-04-24 22:15:56.969435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.966 [2024-04-24 22:15:56.969463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.966 [2024-04-24 22:15:56.969479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.966 [2024-04-24 22:15:56.969492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.966 [2024-04-24 22:15:56.969523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.966 qpair failed and we were unable to recover it. 
00:24:14.966 [2024-04-24 22:15:56.979321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.966 [2024-04-24 22:15:56.979460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.966 [2024-04-24 22:15:56.979488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.966 [2024-04-24 22:15:56.979504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.966 [2024-04-24 22:15:56.979517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.966 [2024-04-24 22:15:56.979549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.966 qpair failed and we were unable to recover it. 
00:24:14.966 [2024-04-24 22:15:56.989410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.966 [2024-04-24 22:15:56.989538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.966 [2024-04-24 22:15:56.989565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.966 [2024-04-24 22:15:56.989581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:56.989594] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:56.989626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:56.999356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:56.999492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:56.999519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:56.999535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:56.999548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:56.999580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.009427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.009557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.009585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.009601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.009614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.009646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.019487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.019635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.019663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.019679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.019692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.019724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.029455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.029582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.029610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.029633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.029647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.029679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.039459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.039637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.039664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.039680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.039693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.039725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.049524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.049666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.049694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.049710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.049723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.049755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.059520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.059654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.059682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.059697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.059711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.059743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.069567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.069714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.069742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.069758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.069770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.069802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.079596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.079725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.079752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.079768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.079781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.079812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.089624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.089755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.089782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.089798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.089811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.089843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.099649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.099776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.099804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.099819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.099833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.099864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.109772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.109900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.109928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.109944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.109957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.109988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.119679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.967 [2024-04-24 22:15:57.119801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.967 [2024-04-24 22:15:57.119836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.967 [2024-04-24 22:15:57.119853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.967 [2024-04-24 22:15:57.119866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.967 [2024-04-24 22:15:57.119898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.967 qpair failed and we were unable to recover it. 
00:24:14.967 [2024-04-24 22:15:57.129742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.129876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.129904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.129920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.129933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.129965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.139729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.139864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.139892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.139908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.139921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.139952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.149831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.149956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.149983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.149999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.150012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.150044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.159890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.160024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.160051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.160067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.160080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.160112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.169857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.169991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.170019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.170034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.170047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.170079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.179855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.179981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.180009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.180025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.180038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.180070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.189895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.190063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.190090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.190106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.190120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.190151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.199914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.200042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.200069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.200085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.200099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.200130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.209943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.210082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.210116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.210133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.210147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.210179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:14.968 [2024-04-24 22:15:57.220027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.968 [2024-04-24 22:15:57.220167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.968 [2024-04-24 22:15:57.220194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.968 [2024-04-24 22:15:57.220210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.968 [2024-04-24 22:15:57.220223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:14.968 [2024-04-24 22:15:57.220255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:14.968 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.229983] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.230133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.230162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.230178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.230191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.230223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.240067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.240193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.240220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.240236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.240249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.240280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.250076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.250232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.250260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.250276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.250289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.250327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.260147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.260311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.260339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.260355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.260368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.260407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.270146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.270268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.270296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.270311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.270324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.270356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.280133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.280268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.280296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.280312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.280325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.280356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.290204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.290342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.290370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.290386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.290407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.290440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.300224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.300356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.300390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.300414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.300428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.300460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.310215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.310343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.310372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.310387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.310410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.310443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.320356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.320517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.228 [2024-04-24 22:15:57.320545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.228 [2024-04-24 22:15:57.320561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.228 [2024-04-24 22:15:57.320575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.228 [2024-04-24 22:15:57.320606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.228 qpair failed and we were unable to recover it. 
00:24:15.228 [2024-04-24 22:15:57.330326] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.228 [2024-04-24 22:15:57.330466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.330494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.330509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.330522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.330555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.340319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.340457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.340485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.340500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.340519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.340552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.350353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.350483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.350515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.350531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.350544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.350575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.360466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.360601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.360629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.360645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.360658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.360690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.370447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.370634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.370663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.370679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.370692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.370725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.380452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.380582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.380611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.380627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.380639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.380671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.390463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.390600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.390627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.390643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.390657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.390688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.400496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.400632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.400660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.400676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.400689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.400720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.410539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.410668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.410695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.410711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.410725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.410756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.420546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.420678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.420706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.420722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.420735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.420767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.430577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.430704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.430732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.430754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.430768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.430800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.440613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.440739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.440767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.440783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.440796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.440828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.450689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.450851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.450878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.450894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.229 [2024-04-24 22:15:57.450907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.229 [2024-04-24 22:15:57.450939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.229 qpair failed and we were unable to recover it. 
00:24:15.229 [2024-04-24 22:15:57.460698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.229 [2024-04-24 22:15:57.460830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.229 [2024-04-24 22:15:57.460858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.229 [2024-04-24 22:15:57.460874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.230 [2024-04-24 22:15:57.460887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.230 [2024-04-24 22:15:57.460919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.230 qpair failed and we were unable to recover it. 
00:24:15.230 [2024-04-24 22:15:57.470693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.230 [2024-04-24 22:15:57.470822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.230 [2024-04-24 22:15:57.470851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.230 [2024-04-24 22:15:57.470867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.230 [2024-04-24 22:15:57.470880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.230 [2024-04-24 22:15:57.470911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.230 qpair failed and we were unable to recover it. 
00:24:15.230 [2024-04-24 22:15:57.480757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.230 [2024-04-24 22:15:57.480897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.230 [2024-04-24 22:15:57.480924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.230 [2024-04-24 22:15:57.480940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.230 [2024-04-24 22:15:57.480953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.230 [2024-04-24 22:15:57.480986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.230 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.490766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.489 [2024-04-24 22:15:57.490897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.489 [2024-04-24 22:15:57.490925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.489 [2024-04-24 22:15:57.490941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.489 [2024-04-24 22:15:57.490954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.489 [2024-04-24 22:15:57.490986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.489 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.500796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.489 [2024-04-24 22:15:57.500928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.489 [2024-04-24 22:15:57.500956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.489 [2024-04-24 22:15:57.500971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.489 [2024-04-24 22:15:57.500985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.489 [2024-04-24 22:15:57.501016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.489 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.510897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.489 [2024-04-24 22:15:57.511022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.489 [2024-04-24 22:15:57.511050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.489 [2024-04-24 22:15:57.511065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.489 [2024-04-24 22:15:57.511079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.489 [2024-04-24 22:15:57.511110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.489 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.520909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.489 [2024-04-24 22:15:57.521036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.489 [2024-04-24 22:15:57.521064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.489 [2024-04-24 22:15:57.521085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.489 [2024-04-24 22:15:57.521100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.489 [2024-04-24 22:15:57.521131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.489 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.530994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.489 [2024-04-24 22:15:57.531133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.489 [2024-04-24 22:15:57.531160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.489 [2024-04-24 22:15:57.531176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.489 [2024-04-24 22:15:57.531189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.489 [2024-04-24 22:15:57.531221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.489 qpair failed and we were unable to recover it. 
00:24:15.489 [2024-04-24 22:15:57.540923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.541096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.541123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.541139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.541152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.541183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.550963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.551087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.551114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.551130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.551144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.551175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.560969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.561095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.561123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.561138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.561151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.561183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.571041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.571174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.571208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.571223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.571236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.571269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.581076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.581223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.581251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.581266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.581279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.581310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.591046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.591167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.591195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.591211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.591225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.591257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.601088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.601218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.601246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.601262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.601275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.601307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.611145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.611303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.611337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.611354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.611367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.611406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.621171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.621388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.621423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.621438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.621452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.621484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.631185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.631317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.631345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.631361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.631374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.631414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.641256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.641437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.641466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.641482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.641494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.641526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.651267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.651423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.651450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.651466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.651479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.651518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.661277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.661412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.490 [2024-04-24 22:15:57.661442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.490 [2024-04-24 22:15:57.661458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.490 [2024-04-24 22:15:57.661471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.490 [2024-04-24 22:15:57.661504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.490 qpair failed and we were unable to recover it. 
00:24:15.490 [2024-04-24 22:15:57.671332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.490 [2024-04-24 22:15:57.671512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.671541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.671557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.671570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.671607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.681291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.681439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.681467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.681483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.681496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.681528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.691437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.691580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.691608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.691624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.691638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.691669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.701440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.701578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.701612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.701629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.701642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.701674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.711427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.711563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.711592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.711608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.711622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.711653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.721415] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.721545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.721574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.721589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.721602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.721634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.731470] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.731606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.731634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.731649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.731662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.731694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.491 [2024-04-24 22:15:57.741510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.491 [2024-04-24 22:15:57.741651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.491 [2024-04-24 22:15:57.741680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.491 [2024-04-24 22:15:57.741696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.491 [2024-04-24 22:15:57.741716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.491 [2024-04-24 22:15:57.741749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.491 qpair failed and we were unable to recover it. 
00:24:15.751 [2024-04-24 22:15:57.751498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.751 [2024-04-24 22:15:57.751631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.751 [2024-04-24 22:15:57.751659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.751 [2024-04-24 22:15:57.751676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.751 [2024-04-24 22:15:57.751690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.751 [2024-04-24 22:15:57.751721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.751 qpair failed and we were unable to recover it. 
00:24:15.751 [2024-04-24 22:15:57.761553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.751 [2024-04-24 22:15:57.761680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.751 [2024-04-24 22:15:57.761708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.751 [2024-04-24 22:15:57.761724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.751 [2024-04-24 22:15:57.761738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.751 [2024-04-24 22:15:57.761770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.751 qpair failed and we were unable to recover it. 
00:24:15.751 [2024-04-24 22:15:57.771626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.751 [2024-04-24 22:15:57.771771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.751 [2024-04-24 22:15:57.771799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.751 [2024-04-24 22:15:57.771815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.751 [2024-04-24 22:15:57.771829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:15.751 [2024-04-24 22:15:57.771861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:15.751 qpair failed and we were unable to recover it. 
00:24:15.751 [2024-04-24 22:15:57.781603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.781734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.781762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.781778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.781792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.781824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.791625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.791784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.791812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.791828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.791841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.791873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.801660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.801783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.801811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.801827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.801840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.801872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.811681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.811815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.811843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.811859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.811872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.811903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.821766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.821896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.821923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.821939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.821953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.821985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.831750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.831882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.831910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.831926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.831953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.831987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.841799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.841926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.841955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.841970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.841983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.842015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.851804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.851934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.851960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.851976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.851990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.852021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.861828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.861977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.862011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.862027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.862040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.862072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.871882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.872019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.872047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.872063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.872076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.872108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.881919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.882047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.882074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.882090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.882103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.882134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.891959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.892098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.892126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.892141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.892154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.892186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.901997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.902156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.902184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.752 [2024-04-24 22:15:57.902200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.752 [2024-04-24 22:15:57.902213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.752 [2024-04-24 22:15:57.902245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.752 qpair failed and we were unable to recover it.
00:24:15.752 [2024-04-24 22:15:57.911991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.752 [2024-04-24 22:15:57.912120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.752 [2024-04-24 22:15:57.912147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.912163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.912176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.912207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.921997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.922124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.922152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.922180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.922194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.922226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.932097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.932228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.932258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.932275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.932289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.932321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.942068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.942197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.942225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.942241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.942254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.942286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.952191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.952314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.952342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.952357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.952373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.952412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.962088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.962217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.962245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.962260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.962274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.962305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.972206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.972345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.972372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.972388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.972411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.972444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.982202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.982341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.982368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.982384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.982405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.982447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:57.992230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:57.992404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:57.992432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:57.992448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:57.992461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:57.992493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:15.753 [2024-04-24 22:15:58.002328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:15.753 [2024-04-24 22:15:58.002457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:15.753 [2024-04-24 22:15:58.002486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:15.753 [2024-04-24 22:15:58.002502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:15.753 [2024-04-24 22:15:58.002515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:15.753 [2024-04-24 22:15:58.002547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:15.753 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.012290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.012434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.012467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.012484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.012497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.012530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.022324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.022462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.022490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.022506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.022519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.022552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.032348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.032496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.032525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.032541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.032554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.032587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.042376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.042542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.042570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.042586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.042600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.042632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.052413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.052561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.052588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.052604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.052617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.052655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.062435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.013 [2024-04-24 22:15:58.062563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.013 [2024-04-24 22:15:58.062591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.013 [2024-04-24 22:15:58.062607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.013 [2024-04-24 22:15:58.062620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.013 [2024-04-24 22:15:58.062652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.013 qpair failed and we were unable to recover it.
00:24:16.013 [2024-04-24 22:15:58.072432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.072561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.072590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.072605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.072619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.072650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.082429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.082568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.082595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.082612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.082625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.082656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.092506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.092649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.092676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.092692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.092705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.092738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.102543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.102703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.102737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.102754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.102767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.102799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.112561] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.112731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.112759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.112776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.112789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.112821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.122585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.122710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.122737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.122753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.122766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.122798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.132713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.132906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.132933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.132950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.132963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.132994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.142608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.014 [2024-04-24 22:15:58.142739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.014 [2024-04-24 22:15:58.142768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.014 [2024-04-24 22:15:58.142784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.014 [2024-04-24 22:15:58.142803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.014 [2024-04-24 22:15:58.142835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.014 qpair failed and we were unable to recover it. 
00:24:16.014 [2024-04-24 22:15:58.152719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.014 [2024-04-24 22:15:58.152870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.014 [2024-04-24 22:15:58.152898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.014 [2024-04-24 22:15:58.152914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.014 [2024-04-24 22:15:58.152927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.014 [2024-04-24 22:15:58.152959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.014 qpair failed and we were unable to recover it. 
00:24:16.014 [2024-04-24 22:15:58.162647] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.014 [2024-04-24 22:15:58.162780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.014 [2024-04-24 22:15:58.162807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.014 [2024-04-24 22:15:58.162823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.014 [2024-04-24 22:15:58.162836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.014 [2024-04-24 22:15:58.162867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.014 qpair failed and we were unable to recover it. 
00:24:16.014 [2024-04-24 22:15:58.172771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.014 [2024-04-24 22:15:58.172920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.014 [2024-04-24 22:15:58.172947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.014 [2024-04-24 22:15:58.172963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.014 [2024-04-24 22:15:58.172975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.014 [2024-04-24 22:15:58.173007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.014 qpair failed and we were unable to recover it. 
00:24:16.014 [2024-04-24 22:15:58.182770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.014 [2024-04-24 22:15:58.182916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.014 [2024-04-24 22:15:58.182944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.014 [2024-04-24 22:15:58.182960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.014 [2024-04-24 22:15:58.182974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.014 [2024-04-24 22:15:58.183005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.014 qpair failed and we were unable to recover it. 
00:24:16.014 [2024-04-24 22:15:58.192817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.014 [2024-04-24 22:15:58.192997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.014 [2024-04-24 22:15:58.193028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.014 [2024-04-24 22:15:58.193044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.014 [2024-04-24 22:15:58.193057] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.014 [2024-04-24 22:15:58.193089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.014 qpair failed and we were unable to recover it.
00:24:16.014 [2024-04-24 22:15:58.202827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.202949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.202977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.202993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.203005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.203037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.212840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.212974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.213001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.213017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.213030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.213062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.222939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.223061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.223089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.223104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.223125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.223157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.232910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.233075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.233103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.233119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.233141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.233174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.242935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.243062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.243090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.243106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.243119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.243151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.252960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.253092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.253120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.253135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.253148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.253180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.015 [2024-04-24 22:15:58.263008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.015 [2024-04-24 22:15:58.263139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.015 [2024-04-24 22:15:58.263167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.015 [2024-04-24 22:15:58.263182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.015 [2024-04-24 22:15:58.263195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.015 [2024-04-24 22:15:58.263237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.015 qpair failed and we were unable to recover it.
00:24:16.274 [2024-04-24 22:15:58.273031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.274 [2024-04-24 22:15:58.273168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.274 [2024-04-24 22:15:58.273197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.274 [2024-04-24 22:15:58.273212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.273225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.273264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.283038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.283161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.283189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.283205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.283218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.283250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.293071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.293202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.293229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.293245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.293258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.293291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.303158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.303327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.303355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.303371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.303384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.303423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.313111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.313286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.313314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.313330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.313344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.313377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.323130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.323273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.323301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.323325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.323339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.323371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.333253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.333386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.333421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.333437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.333450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.333483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.343226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.343371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.343408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.343427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.343440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.343473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.353267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.353410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.353438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.353454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.353467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.353499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.363250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.363374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.363410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.363427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.363440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.363472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.373324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.373465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.373493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.373509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.373522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.373554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.383320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.383475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.383502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.383518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.383531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.383563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.393345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.393477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.393505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.393521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.393534] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.393567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.403380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.275 [2024-04-24 22:15:58.403522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.275 [2024-04-24 22:15:58.403550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.275 [2024-04-24 22:15:58.403566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.275 [2024-04-24 22:15:58.403578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.275 [2024-04-24 22:15:58.403611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.275 qpair failed and we were unable to recover it.
00:24:16.275 [2024-04-24 22:15:58.413451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.276 [2024-04-24 22:15:58.413584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.276 [2024-04-24 22:15:58.413617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.276 [2024-04-24 22:15:58.413634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.276 [2024-04-24 22:15:58.413647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.276 [2024-04-24 22:15:58.413679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.276 qpair failed and we were unable to recover it.
00:24:16.276 [2024-04-24 22:15:58.423484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.276 [2024-04-24 22:15:58.423619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.276 [2024-04-24 22:15:58.423647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.276 [2024-04-24 22:15:58.423662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.276 [2024-04-24 22:15:58.423675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.276 [2024-04-24 22:15:58.423708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.276 qpair failed and we were unable to recover it.
00:24:16.276 [2024-04-24 22:15:58.433513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.276 [2024-04-24 22:15:58.433635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.276 [2024-04-24 22:15:58.433664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.276 [2024-04-24 22:15:58.433680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.276 [2024-04-24 22:15:58.433692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.276 [2024-04-24 22:15:58.433724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.276 qpair failed and we were unable to recover it.
00:24:16.276 [2024-04-24 22:15:58.443517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.276 [2024-04-24 22:15:58.443659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.276 [2024-04-24 22:15:58.443687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.276 [2024-04-24 22:15:58.443704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.276 [2024-04-24 22:15:58.443717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.276 [2024-04-24 22:15:58.443749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.276 qpair failed and we were unable to recover it.
00:24:16.276 [2024-04-24 22:15:58.453569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:16.276 [2024-04-24 22:15:58.453701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:16.276 [2024-04-24 22:15:58.453728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:16.276 [2024-04-24 22:15:58.453744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:16.276 [2024-04-24 22:15:58.453758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90
00:24:16.276 [2024-04-24 22:15:58.453795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:16.276 qpair failed and we were unable to recover it.
00:24:16.276 [2024-04-24 22:15:58.463564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.463691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.463719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.463735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.463748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.463780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.473618] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.473751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.473779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.473795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.473808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.473840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.483604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.483738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.483766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.483782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.483796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.483828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.493669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.493826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.493854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.493869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.493882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.493914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.503710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.503864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.503898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.503914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.503927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.503959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.513725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.513876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.513903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.513920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.513933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.513964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.276 [2024-04-24 22:15:58.523840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.276 [2024-04-24 22:15:58.523973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.276 [2024-04-24 22:15:58.524000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.276 [2024-04-24 22:15:58.524016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.276 [2024-04-24 22:15:58.524028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.276 [2024-04-24 22:15:58.524060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.276 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.533788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.536 [2024-04-24 22:15:58.533918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.536 [2024-04-24 22:15:58.533946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.536 [2024-04-24 22:15:58.533962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.536 [2024-04-24 22:15:58.533976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.536 [2024-04-24 22:15:58.534010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.536 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.543834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.536 [2024-04-24 22:15:58.543986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.536 [2024-04-24 22:15:58.544014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.536 [2024-04-24 22:15:58.544030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.536 [2024-04-24 22:15:58.544043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.536 [2024-04-24 22:15:58.544081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.536 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.553863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.536 [2024-04-24 22:15:58.553997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.536 [2024-04-24 22:15:58.554025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.536 [2024-04-24 22:15:58.554041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.536 [2024-04-24 22:15:58.554054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.536 [2024-04-24 22:15:58.554087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.536 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.563801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.536 [2024-04-24 22:15:58.563925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.536 [2024-04-24 22:15:58.563953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.536 [2024-04-24 22:15:58.563969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.536 [2024-04-24 22:15:58.563982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.536 [2024-04-24 22:15:58.564013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.536 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.573870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.536 [2024-04-24 22:15:58.574022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.536 [2024-04-24 22:15:58.574050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.536 [2024-04-24 22:15:58.574066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.536 [2024-04-24 22:15:58.574079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.536 [2024-04-24 22:15:58.574111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.536 qpair failed and we were unable to recover it. 
00:24:16.536 [2024-04-24 22:15:58.583937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.584064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.584092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.584108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.584121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.584154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.593929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.594103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.594131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.594146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.594159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.594191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.603938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.604077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.604104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.604120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.604133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.604165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.613967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.614103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.614131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.614146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.614160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.614191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.624029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.624162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.624190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.624206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.624219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.624251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.634053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.634183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.634211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.634233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.634252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.634285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.644052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.644180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.644207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.644223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.644237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.644269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.654115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.654263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.654289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.654304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.654317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.654349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.664146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.664281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.664309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.664325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.664339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.664371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.674136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.674281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.674309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.674325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.674339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.674370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.684168] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.684291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.684318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.684334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.684347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.684379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.694230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.694381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.694416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.694432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.694445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.694477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.704226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.704361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.704389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.704413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.537 [2024-04-24 22:15:58.704427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.537 [2024-04-24 22:15:58.704459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.537 qpair failed and we were unable to recover it. 
00:24:16.537 [2024-04-24 22:15:58.714241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.537 [2024-04-24 22:15:58.714369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.537 [2024-04-24 22:15:58.714405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.537 [2024-04-24 22:15:58.714422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.714436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.714468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.724291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.724423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.724451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.724473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.724488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.724520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.734324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.734496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.734524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.734540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.734553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.734585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.744360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.744525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.744554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.744570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.744583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.744615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.754436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.754580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.754607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.754623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.754636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.754668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.764383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.764514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.764542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.764557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.764570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.764602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.774477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.774611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.774639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.774655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.774668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.774700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.538 [2024-04-24 22:15:58.784475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.538 [2024-04-24 22:15:58.784605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.538 [2024-04-24 22:15:58.784633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.538 [2024-04-24 22:15:58.784648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.538 [2024-04-24 22:15:58.784662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.538 [2024-04-24 22:15:58.784694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.538 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.794499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.794626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.794654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.794670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.794683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.797 [2024-04-24 22:15:58.794714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.804510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.804639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.804666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.804682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.804695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d78000b90 00:24:16.797 [2024-04-24 22:15:58.804727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.814789] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.814927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.814968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.814987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.815002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d70000b90 00:24:16.797 [2024-04-24 22:15:58.815036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.824621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.824746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.824776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.824793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.824807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d70000b90 00:24:16.797 [2024-04-24 22:15:58.824839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.834623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.834757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.834793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.834812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.834825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b0fb00 00:24:16.797 [2024-04-24 22:15:58.834858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.844669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.844804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.844834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.844851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.797 [2024-04-24 22:15:58.844864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b0fb00 00:24:16.797 [2024-04-24 22:15:58.844895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:16.797 qpair failed and we were unable to recover it. 
00:24:16.797 [2024-04-24 22:15:58.854765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.797 [2024-04-24 22:15:58.854905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.797 [2024-04-24 22:15:58.854940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.797 [2024-04-24 22:15:58.854958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.798 [2024-04-24 22:15:58.854972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d68000b90 00:24:16.798 [2024-04-24 22:15:58.855006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.798 qpair failed and we were unable to recover it. 
00:24:16.798 [2024-04-24 22:15:58.864728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.798 [2024-04-24 22:15:58.864974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.798 [2024-04-24 22:15:58.865005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.798 [2024-04-24 22:15:58.865022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.798 [2024-04-24 22:15:58.865036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d68000b90 00:24:16.798 [2024-04-24 22:15:58.865069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.798 qpair failed and we were unable to recover it. 00:24:16.798 [2024-04-24 22:15:58.865169] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:16.798 A controller has encountered a failure and is being reset. 00:24:16.798 [2024-04-24 22:15:58.865230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b04690 (9): Bad file descriptor 00:24:16.798 Controller properly reset. 
00:24:16.798 Initializing NVMe Controllers 00:24:16.798 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:16.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:16.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:16.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:16.798 Initialization complete. Launching workers. 00:24:16.798 Starting thread on core 1 00:24:16.798 Starting thread on core 2 00:24:16.798 Starting thread on core 3 00:24:16.798 Starting thread on core 0 00:24:16.798 22:15:59 -- host/target_disconnect.sh@59 -- # sync 00:24:16.798 00:24:16.798 real 0m10.931s 00:24:16.798 user 0m18.855s 00:24:16.798 sys 0m5.246s 00:24:16.798 22:15:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:16.798 22:15:59 -- common/autotest_common.sh@10 -- # set +x 00:24:16.798 ************************************ 00:24:16.798 END TEST nvmf_target_disconnect_tc2 00:24:16.798 ************************************ 00:24:16.798 22:15:59 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:24:16.798 22:15:59 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:16.798 22:15:59 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:24:16.798 22:15:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:16.798 22:15:59 -- nvmf/common.sh@117 -- # sync 00:24:16.798 22:15:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.798 22:15:59 -- nvmf/common.sh@120 -- # set +e 00:24:16.798 22:15:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.798 22:15:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.798 rmmod nvme_tcp 00:24:17.057 rmmod nvme_fabrics 00:24:17.057 rmmod nvme_keyring 00:24:17.057 22:15:59 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.057 22:15:59 -- nvmf/common.sh@124 -- # set -e 00:24:17.057 22:15:59 -- nvmf/common.sh@125 -- # return 0 00:24:17.057 22:15:59 -- nvmf/common.sh@478 -- # '[' -n 4039690 ']' 00:24:17.057 22:15:59 -- nvmf/common.sh@479 -- # killprocess 4039690 00:24:17.057 22:15:59 -- common/autotest_common.sh@936 -- # '[' -z 4039690 ']' 00:24:17.057 22:15:59 -- common/autotest_common.sh@940 -- # kill -0 4039690 00:24:17.057 22:15:59 -- common/autotest_common.sh@941 -- # uname 00:24:17.057 22:15:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.057 22:15:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4039690 00:24:17.057 22:15:59 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:24:17.057 22:15:59 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:24:17.057 22:15:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4039690' 00:24:17.057 killing process with pid 4039690 00:24:17.057 22:15:59 -- common/autotest_common.sh@955 -- # kill 4039690 00:24:17.057 [2024-04-24 22:15:59.114061] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:17.057 22:15:59 -- common/autotest_common.sh@960 -- # wait 4039690 00:24:17.316 22:15:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.316 22:15:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.316 22:15:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.316 22:15:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.316 22:15:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.316 22:15:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.316 22:15:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.316 22:15:59 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:19.216 22:16:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.216 00:24:19.216 real 0m16.426s 00:24:19.216 user 0m45.360s 00:24:19.216 sys 0m7.652s 00:24:19.216 22:16:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.216 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.216 ************************************ 00:24:19.216 END TEST nvmf_target_disconnect 00:24:19.216 ************************************ 00:24:19.475 22:16:01 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:24:19.475 22:16:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.475 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.475 22:16:01 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:24:19.475 00:24:19.475 real 16m17.043s 00:24:19.475 user 37m47.344s 00:24:19.475 sys 4m33.230s 00:24:19.475 22:16:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.475 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.475 ************************************ 00:24:19.475 END TEST nvmf_tcp 00:24:19.475 ************************************ 00:24:19.475 22:16:01 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:24:19.475 22:16:01 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:19.475 22:16:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:19.475 22:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:19.475 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.475 ************************************ 00:24:19.475 START TEST spdkcli_nvmf_tcp 00:24:19.475 ************************************ 00:24:19.475 22:16:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:19.475 * Looking for test storage... 
00:24:19.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:19.475 22:16:01 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:19.475 22:16:01 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.475 22:16:01 -- nvmf/common.sh@7 -- # uname -s 00:24:19.475 22:16:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.475 22:16:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.475 22:16:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.475 22:16:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.475 22:16:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.475 22:16:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.475 22:16:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.475 22:16:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.475 22:16:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.475 22:16:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.475 22:16:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.475 22:16:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:19.475 22:16:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.475 22:16:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.475 22:16:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.475 22:16:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.475 22:16:01 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.475 22:16:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.475 22:16:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.475 22:16:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.475 22:16:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.475 22:16:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.475 22:16:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.475 22:16:01 -- paths/export.sh@5 -- # export PATH 00:24:19.475 22:16:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.475 22:16:01 -- nvmf/common.sh@47 -- # : 0 00:24:19.475 22:16:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.475 22:16:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.475 22:16:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.475 22:16:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.475 22:16:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.475 22:16:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.475 22:16:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.475 22:16:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:19.475 22:16:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:19.475 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.475 22:16:01 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:19.475 22:16:01 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4040893 00:24:19.475 22:16:01 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:19.475 22:16:01 -- spdkcli/common.sh@34 -- # waitforlisten 4040893 00:24:19.475 22:16:01 -- common/autotest_common.sh@817 -- # '[' -z 4040893 ']' 00:24:19.475 22:16:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.475 22:16:01 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:24:19.475 22:16:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.475 22:16:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:19.475 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:24:19.734 [2024-04-24 22:16:01.764040] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:24:19.734 [2024-04-24 22:16:01.764131] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040893 ] 00:24:19.734 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.734 [2024-04-24 22:16:01.832702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:19.734 [2024-04-24 22:16:01.956427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.734 [2024-04-24 22:16:01.956432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.992 22:16:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:19.992 22:16:02 -- common/autotest_common.sh@850 -- # return 0 00:24:19.992 22:16:02 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:19.992 22:16:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.992 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:24:19.992 22:16:02 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:19.992 22:16:02 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:19.992 22:16:02 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:19.992 22:16:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:19.992 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:24:19.992 22:16:02 -- spdkcli/nvmf.sh@65 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:19.992 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:19.992 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:19.992 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:19.992 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:19.992 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:19.992 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:19.993 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:19.993 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:19.993 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:19.993 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:19.993 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:19.993 ' 00:24:20.558 [2024-04-24 22:16:02.788046] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:23.088 [2024-04-24 22:16:04.980721] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.021 [2024-04-24 22:16:06.220500] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:24.021 [2024-04-24 22:16:06.221147] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:26.545 [2024-04-24 22:16:08.516273] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:28.441 [2024-04-24 22:16:10.494416] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:29.817 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:29.817 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:29.817 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:29.817 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:29.817 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:29.817 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:29.817 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 
IPv4', False] 00:24:30.074 22:16:12 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:30.074 22:16:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.074 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.074 22:16:12 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:30.074 22:16:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.074 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.074 22:16:12 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:30.074 22:16:12 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:30.641 22:16:12 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:30.641 22:16:12 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:30.641 22:16:12 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:30.641 22:16:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.641 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.641 22:16:12 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:30.641 22:16:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.641 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.641 22:16:12 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:30.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:30.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:30.641 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:30.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:30.641 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:30.641 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:30.641 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:30.641 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:30.641 ' 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:35.904 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:35.904 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:35.904 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 
00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:35.904 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:35.904 22:16:17 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:35.904 22:16:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:35.904 22:16:17 -- common/autotest_common.sh@10 -- # set +x 00:24:35.904 22:16:17 -- spdkcli/nvmf.sh@90 -- # killprocess 4040893 00:24:35.904 22:16:17 -- common/autotest_common.sh@936 -- # '[' -z 4040893 ']' 00:24:35.904 22:16:17 -- common/autotest_common.sh@940 -- # kill -0 4040893 00:24:35.904 22:16:17 -- common/autotest_common.sh@941 -- # uname 00:24:35.904 22:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:35.904 22:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4040893 00:24:35.904 22:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:35.904 22:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:35.904 22:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4040893' 00:24:35.904 killing process with pid 4040893 00:24:35.904 22:16:18 -- common/autotest_common.sh@955 -- # kill 4040893 00:24:35.904 [2024-04-24 22:16:18.026928] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:35.904 [2024-04-24 22:16:18.026987] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:24:35.904 22:16:18 -- common/autotest_common.sh@960 -- # wait 4040893 00:24:36.163 22:16:18 -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:36.163 22:16:18 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:36.163 22:16:18 -- spdkcli/common.sh@13 -- # '[' -n 4040893 ']' 00:24:36.163 22:16:18 -- spdkcli/common.sh@14 -- # killprocess 4040893 00:24:36.163 22:16:18 -- common/autotest_common.sh@936 -- # '[' -z 4040893 ']' 00:24:36.163 22:16:18 -- common/autotest_common.sh@940 -- # kill -0 4040893 00:24:36.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4040893) - No such process 00:24:36.163 22:16:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4040893 is not found' 00:24:36.163 Process with pid 4040893 is not found 00:24:36.163 22:16:18 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:36.163 22:16:18 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:36.163 22:16:18 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:36.163 00:24:36.163 real 0m16.690s 00:24:36.163 user 0m35.659s 00:24:36.163 sys 0m0.901s 00:24:36.163 22:16:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:36.163 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:24:36.163 ************************************ 00:24:36.163 END TEST spdkcli_nvmf_tcp 00:24:36.163 ************************************ 00:24:36.163 22:16:18 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:36.163 22:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:36.163 22:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.163 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:24:36.422 
************************************ 00:24:36.422 START TEST nvmf_identify_passthru 00:24:36.422 ************************************ 00:24:36.422 22:16:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:36.422 * Looking for test storage... 00:24:36.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.422 22:16:18 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.422 22:16:18 -- nvmf/common.sh@7 -- # uname -s 00:24:36.422 22:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.422 22:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.422 22:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.422 22:16:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.422 22:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.422 22:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.422 22:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.422 22:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.422 22:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.422 22:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.422 22:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.422 22:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.422 22:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.422 22:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.422 22:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.422 22:16:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.422 22:16:18 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.422 22:16:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.422 22:16:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.422 22:16:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.422 22:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@5 -- # export PATH 00:24:36.422 22:16:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- nvmf/common.sh@47 -- # : 0 00:24:36.422 22:16:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.422 22:16:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.422 22:16:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.422 22:16:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.422 22:16:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.422 22:16:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.422 22:16:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.422 22:16:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.422 22:16:18 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.422 22:16:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.422 22:16:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.422 22:16:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.422 22:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- paths/export.sh@5 -- # export PATH 00:24:36.422 22:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.422 22:16:18 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:36.422 22:16:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:36.422 22:16:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.422 22:16:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:36.422 22:16:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:36.422 22:16:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 
00:24:36.422 22:16:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.422 22:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:36.422 22:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.422 22:16:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:36.422 22:16:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:36.422 22:16:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.422 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:24:39.011 22:16:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:39.011 22:16:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.011 22:16:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.011 22:16:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.011 22:16:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.011 22:16:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.011 22:16:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.011 22:16:20 -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.011 22:16:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.011 22:16:20 -- nvmf/common.sh@296 -- # e810=() 00:24:39.011 22:16:20 -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.011 22:16:20 -- nvmf/common.sh@297 -- # x722=() 00:24:39.011 22:16:20 -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.011 22:16:20 -- nvmf/common.sh@298 -- # mlx=() 00:24:39.011 22:16:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.011 22:16:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.011 22:16:20 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.011 22:16:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.011 22:16:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.011 22:16:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.011 22:16:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:39.011 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:39.011 22:16:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.011 22:16:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:39.011 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:39.011 22:16:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.011 22:16:20 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.011 22:16:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.011 22:16:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.011 22:16:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:39.011 Found net devices under 0000:84:00.0: cvl_0_0 00:24:39.011 22:16:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.011 22:16:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.011 22:16:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.011 22:16:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.011 22:16:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:39.011 Found net devices under 0000:84:00.1: cvl_0_1 00:24:39.011 22:16:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.011 22:16:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:39.011 22:16:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:39.011 22:16:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:39.011 22:16:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.011 22:16:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.011 22:16:20 -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.011 22:16:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.011 22:16:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.011 22:16:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.011 22:16:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.011 22:16:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.011 22:16:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.011 22:16:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.011 22:16:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.011 22:16:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.011 22:16:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.011 22:16:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.011 22:16:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.011 22:16:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.011 22:16:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.011 22:16:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.011 22:16:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.011 22:16:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:24:39.011 00:24:39.011 --- 10.0.0.2 ping statistics --- 00:24:39.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.012 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:39.012 22:16:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:24:39.012 00:24:39.012 --- 10.0.0.1 ping statistics --- 00:24:39.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.012 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:39.012 22:16:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.012 22:16:20 -- nvmf/common.sh@411 -- # return 0 00:24:39.012 22:16:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:39.012 22:16:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.012 22:16:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:39.012 22:16:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:39.012 22:16:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.012 22:16:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:39.012 22:16:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:39.012 22:16:20 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:39.012 22:16:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:39.012 22:16:20 -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 22:16:20 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:39.012 22:16:20 -- common/autotest_common.sh@1510 -- # bdfs=() 00:24:39.012 22:16:20 -- common/autotest_common.sh@1510 -- # local bdfs 00:24:39.012 22:16:20 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:24:39.012 22:16:20 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:24:39.012 22:16:20 -- common/autotest_common.sh@1499 -- # bdfs=() 00:24:39.012 22:16:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:24:39.012 22:16:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:39.012 22:16:20 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:39.012 22:16:20 -- 
common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:24:39.012 22:16:20 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:24:39.012 22:16:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:82:00.0 00:24:39.012 22:16:20 -- common/autotest_common.sh@1513 -- # echo 0000:82:00.0 00:24:39.012 22:16:20 -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:24:39.012 22:16:20 -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:24:39.012 22:16:20 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:24:39.012 22:16:20 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:39.012 22:16:20 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:39.012 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.205 22:16:25 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:24:43.205 22:16:25 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:24:43.205 22:16:25 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:43.205 22:16:25 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:43.205 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.391 22:16:29 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:47.391 22:16:29 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:47.391 22:16:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:47.391 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 22:16:29 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:47.391 22:16:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:47.391 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 22:16:29 -- target/identify_passthru.sh@31 -- # 
nvmfpid=4045550 00:24:47.391 22:16:29 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:47.391 22:16:29 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.391 22:16:29 -- target/identify_passthru.sh@35 -- # waitforlisten 4045550 00:24:47.391 22:16:29 -- common/autotest_common.sh@817 -- # '[' -z 4045550 ']' 00:24:47.391 22:16:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.391 22:16:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:47.391 22:16:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.391 22:16:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:47.391 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.391 [2024-04-24 22:16:29.523013] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:24:47.391 [2024-04-24 22:16:29.523104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.391 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.391 [2024-04-24 22:16:29.603033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.649 [2024-04-24 22:16:29.724370] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.649 [2024-04-24 22:16:29.724441] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:47.649 [2024-04-24 22:16:29.724459] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.649 [2024-04-24 22:16:29.724473] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.649 [2024-04-24 22:16:29.724485] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.649 [2024-04-24 22:16:29.724543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.649 [2024-04-24 22:16:29.724598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.649 [2024-04-24 22:16:29.724651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.649 [2024-04-24 22:16:29.724654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.649 22:16:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:47.649 22:16:29 -- common/autotest_common.sh@850 -- # return 0 00:24:47.649 22:16:29 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:47.649 22:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.649 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.649 INFO: Log level set to 20 00:24:47.649 INFO: Requests: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "method": "nvmf_set_config", 00:24:47.649 "id": 1, 00:24:47.649 "params": { 00:24:47.649 "admin_cmd_passthru": { 00:24:47.649 "identify_ctrlr": true 00:24:47.649 } 00:24:47.649 } 00:24:47.649 } 00:24:47.649 00:24:47.649 INFO: response: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "id": 1, 00:24:47.649 "result": true 00:24:47.649 } 00:24:47.649 00:24:47.649 22:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.649 22:16:29 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:47.649 22:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.649 22:16:29 -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.649 INFO: Setting log level to 20 00:24:47.649 INFO: Setting log level to 20 00:24:47.649 INFO: Log level set to 20 00:24:47.649 INFO: Log level set to 20 00:24:47.649 INFO: Requests: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "method": "framework_start_init", 00:24:47.649 "id": 1 00:24:47.649 } 00:24:47.649 00:24:47.649 INFO: Requests: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "method": "framework_start_init", 00:24:47.649 "id": 1 00:24:47.649 } 00:24:47.649 00:24:47.649 [2024-04-24 22:16:29.895697] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:47.649 INFO: response: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "id": 1, 00:24:47.649 "result": true 00:24:47.649 } 00:24:47.649 00:24:47.649 INFO: response: 00:24:47.649 { 00:24:47.649 "jsonrpc": "2.0", 00:24:47.649 "id": 1, 00:24:47.649 "result": true 00:24:47.649 } 00:24:47.649 00:24:47.649 22:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.649 22:16:29 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.649 22:16:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.649 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.649 INFO: Setting log level to 40 00:24:47.649 INFO: Setting log level to 40 00:24:47.649 INFO: Setting log level to 40 00:24:47.908 [2024-04-24 22:16:29.905755] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.908 22:16:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.908 22:16:29 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:47.908 22:16:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:47.908 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.908 22:16:29 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:24:47.908 22:16:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.908 22:16:29 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 Nvme0n1 00:24:51.191 22:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:32 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:51.191 22:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.191 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 22:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:32 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:51.191 22:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.191 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 22:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:32 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.191 22:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.191 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 [2024-04-24 22:16:32.799769] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:51.191 [2024-04-24 22:16:32.800113] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.191 22:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:32 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:51.191 22:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.191 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 [2024-04-24 22:16:32.807761] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated 
feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:51.191 [ 00:24:51.191 { 00:24:51.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:51.191 "subtype": "Discovery", 00:24:51.191 "listen_addresses": [], 00:24:51.191 "allow_any_host": true, 00:24:51.191 "hosts": [] 00:24:51.191 }, 00:24:51.191 { 00:24:51.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.191 "subtype": "NVMe", 00:24:51.191 "listen_addresses": [ 00:24:51.191 { 00:24:51.191 "transport": "TCP", 00:24:51.191 "trtype": "TCP", 00:24:51.191 "adrfam": "IPv4", 00:24:51.191 "traddr": "10.0.0.2", 00:24:51.191 "trsvcid": "4420" 00:24:51.191 } 00:24:51.191 ], 00:24:51.191 "allow_any_host": true, 00:24:51.191 "hosts": [], 00:24:51.191 "serial_number": "SPDK00000000000001", 00:24:51.191 "model_number": "SPDK bdev Controller", 00:24:51.191 "max_namespaces": 1, 00:24:51.191 "min_cntlid": 1, 00:24:51.191 "max_cntlid": 65519, 00:24:51.191 "namespaces": [ 00:24:51.191 { 00:24:51.191 "nsid": 1, 00:24:51.191 "bdev_name": "Nvme0n1", 00:24:51.191 "name": "Nvme0n1", 00:24:51.191 "nguid": "89E7849712F9486C9F61E1D14173C7FA", 00:24:51.191 "uuid": "89e78497-12f9-486c-9f61-e1d14173c7fa" 00:24:51.191 } 00:24:51.191 ] 00:24:51.191 } 00:24:51.191 ] 00:24:51.191 22:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:32 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:51.191 22:16:32 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:51.191 22:16:32 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:51.191 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.191 22:16:32 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:24:51.191 22:16:32 -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:51.191 22:16:32 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:51.191 22:16:32 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:51.191 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.191 22:16:33 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:24:51.191 22:16:33 -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:24:51.191 22:16:33 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:24:51.191 22:16:33 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.191 22:16:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.191 22:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.191 22:16:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.191 22:16:33 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:51.191 22:16:33 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:51.191 22:16:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:51.191 22:16:33 -- nvmf/common.sh@117 -- # sync 00:24:51.191 22:16:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.191 22:16:33 -- nvmf/common.sh@120 -- # set +e 00:24:51.191 22:16:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.191 22:16:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.191 rmmod nvme_tcp 00:24:51.191 rmmod nvme_fabrics 00:24:51.191 rmmod nvme_keyring 00:24:51.191 22:16:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.191 22:16:33 -- nvmf/common.sh@124 -- # set -e 00:24:51.191 22:16:33 -- nvmf/common.sh@125 -- # return 0 00:24:51.191 22:16:33 -- nvmf/common.sh@478 -- # '[' -n 4045550 ']' 00:24:51.191 22:16:33 -- nvmf/common.sh@479 -- # killprocess 4045550 00:24:51.191 22:16:33 -- 
common/autotest_common.sh@936 -- # '[' -z 4045550 ']' 00:24:51.191 22:16:33 -- common/autotest_common.sh@940 -- # kill -0 4045550 00:24:51.191 22:16:33 -- common/autotest_common.sh@941 -- # uname 00:24:51.191 22:16:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:51.191 22:16:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4045550 00:24:51.191 22:16:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:51.191 22:16:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:51.191 22:16:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4045550' 00:24:51.191 killing process with pid 4045550 00:24:51.191 22:16:33 -- common/autotest_common.sh@955 -- # kill 4045550 00:24:51.191 [2024-04-24 22:16:33.271549] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:51.191 [2024-04-24 22:16:33.271586] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:51.191 22:16:33 -- common/autotest_common.sh@960 -- # wait 4045550 00:24:53.091 22:16:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:53.091 22:16:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:53.091 22:16:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:53.091 22:16:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.091 22:16:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.091 22:16:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.091 22:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:53.091 22:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.048 22:16:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.048 
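The serial/model checks above (`'[' BTLJ9142051K1P0FGN '!=' ... ']'`) compare values pulled out of `spdk_nvme_identify` output with the `grep | awk '{print $3}'` pattern seen in identify_passthru.sh. A minimal sketch of that extraction, with the identify output hard-coded as sample data (the serial matches this log; the full model line is an illustrative stand-in, since only field 3, `INTEL`, appears above):

```shell
# Sketch of the field extraction used by identify_passthru.sh:
# grep a labelled line, take the third whitespace-separated column.
sample_identify_output() {
  cat <<'EOF'
Serial Number:                       BTLJ9142051K1P0FGN
Model Number:                        INTEL SSDPE2KX010T8
EOF
}

serial=$(sample_identify_output | grep 'Serial Number:' | awk '{print $3}')
model=$(sample_identify_output | grep 'Model Number:' | awk '{print $3}')
echo "$serial $model"
```

Note the caveat visible in the log itself: `awk '{print $3}'` keeps only the first word of a multi-word model string, which is why the test compares `INTEL` rather than a full model number.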
00:24:55.048 real 0m18.515s 00:24:55.048 user 0m27.208s 00:24:55.048 sys 0m2.668s 00:24:55.048 22:16:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:55.048 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:24:55.048 ************************************ 00:24:55.048 END TEST nvmf_identify_passthru 00:24:55.048 ************************************ 00:24:55.048 22:16:37 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:55.048 22:16:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:55.048 22:16:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.048 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.048 ************************************ 00:24:55.048 START TEST nvmf_dif 00:24:55.048 ************************************ 00:24:55.048 22:16:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:55.048 * Looking for test storage... 
00:24:55.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:55.048 22:16:37 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.048 22:16:37 -- nvmf/common.sh@7 -- # uname -s 00:24:55.048 22:16:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.048 22:16:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.048 22:16:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.048 22:16:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.048 22:16:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.048 22:16:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.048 22:16:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.048 22:16:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.048 22:16:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.048 22:16:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.048 22:16:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:55.048 22:16:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:55.048 22:16:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.048 22:16:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.048 22:16:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.048 22:16:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.048 22:16:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.048 22:16:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.048 22:16:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.048 22:16:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.048 22:16:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.048 22:16:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.048 22:16:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.048 22:16:37 -- paths/export.sh@5 -- # export PATH 00:24:55.048 22:16:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.048 22:16:37 -- nvmf/common.sh@47 -- # : 0 00:24:55.048 22:16:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.048 22:16:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.048 
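The `paths/export.sh` steps above prepend the Go/protoc/golangci directories on every invocation, which is why the exported `PATH` shows each entry repeated several times. A common order-preserving dedup (illustrative, not part of the test scripts):

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each.
# awk splits on ':' records; !seen[$0]++ passes each entry only once.
dedup_path() {
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/a/bin:/b/bin:/a/bin"
```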
22:16:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.048 22:16:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.048 22:16:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.048 22:16:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.048 22:16:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.048 22:16:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.048 22:16:37 -- target/dif.sh@15 -- # NULL_META=16 00:24:55.048 22:16:37 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:55.048 22:16:37 -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:55.048 22:16:37 -- target/dif.sh@15 -- # NULL_DIF=1 00:24:55.048 22:16:37 -- target/dif.sh@135 -- # nvmftestinit 00:24:55.048 22:16:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:55.048 22:16:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.048 22:16:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:55.048 22:16:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:55.048 22:16:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:55.049 22:16:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.049 22:16:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:55.049 22:16:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.049 22:16:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:55.049 22:16:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:55.049 22:16:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.049 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.580 22:16:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:57.580 22:16:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.580 22:16:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.580 22:16:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.580 22:16:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.580 22:16:39 -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:24:57.580 22:16:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.580 22:16:39 -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.580 22:16:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.580 22:16:39 -- nvmf/common.sh@296 -- # e810=() 00:24:57.580 22:16:39 -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.580 22:16:39 -- nvmf/common.sh@297 -- # x722=() 00:24:57.580 22:16:39 -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.580 22:16:39 -- nvmf/common.sh@298 -- # mlx=() 00:24:57.580 22:16:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.580 22:16:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.580 22:16:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.580 22:16:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.580 22:16:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.580 22:16:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.580 22:16:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.580 22:16:39 -- nvmf/common.sh@335 -- # (( 2 == 
0 )) 00:24:57.580 22:16:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.580 22:16:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:57.580 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:57.580 22:16:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.581 22:16:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:57.581 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:57.581 22:16:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.581 22:16:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.581 22:16:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.581 22:16:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.581 22:16:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.581 22:16:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:57.581 Found net devices under 0000:84:00.0: cvl_0_0 00:24:57.581 22:16:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.581 22:16:39 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:57.581 22:16:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.581 22:16:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:57.581 22:16:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.581 22:16:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:57.581 Found net devices under 0000:84:00.1: cvl_0_1 00:24:57.581 22:16:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.581 22:16:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:57.581 22:16:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:57.581 22:16:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:57.581 22:16:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:57.581 22:16:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.581 22:16:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.581 22:16:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.581 22:16:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.581 22:16:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.581 22:16:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.581 22:16:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.581 22:16:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.581 22:16:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.581 22:16:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.581 22:16:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.581 22:16:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.581 22:16:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.581 22:16:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.581 22:16:39 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.581 22:16:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.581 22:16:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.581 22:16:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.581 22:16:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.581 22:16:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:57.581 00:24:57.581 --- 10.0.0.2 ping statistics --- 00:24:57.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.581 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:57.581 22:16:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:24:57.581 00:24:57.581 --- 10.0.0.1 ping statistics --- 00:24:57.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.581 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:57.581 22:16:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.581 22:16:39 -- nvmf/common.sh@411 -- # return 0 00:24:57.581 22:16:39 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:24:57.581 22:16:39 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:58.956 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:58.956 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:58.956 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:58.956 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:58.956 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:58.956 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:58.956 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:58.956 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:58.956 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:58.956 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:58.956 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:58.956 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:58.956 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:58.956 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:58.956 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:58.956 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:58.956 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:58.956 22:16:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.956 22:16:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
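The `nvmf_tcp_init` sequence above moves the target-side interface into its own network namespace so the initiator can reach 10.0.0.2 over real TCP. A dry-run sketch of those steps that only emits the commands (names mirror the log's `cvl_0_0`/`cvl_0_0_ns_spdk`; executing them for real requires root and the actual netdevs):

```shell
# Emit (do not execute) the netns plumbing performed by nvmf_tcp_init:
# create the namespace, move the target interface in, address it, bring it up.
emit_netns_setup() {
  local ns=$1 tgt_if=$2 tgt_ip=$3
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"
  echo "ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if"
  echo "ip netns exec $ns ip link set $tgt_if up"
}

emit_netns_setup cvl_0_0_ns_spdk cvl_0_0 10.0.0.2
```

The initiator-side interface (`cvl_0_1`, 10.0.0.1) stays in the root namespace, which is what the two ping directions above verify.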
00:24:58.956 22:16:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:58.956 22:16:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.956 22:16:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:58.956 22:16:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:58.956 22:16:40 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:58.956 22:16:40 -- target/dif.sh@137 -- # nvmfappstart 00:24:58.956 22:16:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:58.956 22:16:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.956 22:16:40 -- common/autotest_common.sh@10 -- # set +x 00:24:58.956 22:16:40 -- nvmf/common.sh@470 -- # nvmfpid=4048841 00:24:58.956 22:16:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:58.956 22:16:40 -- nvmf/common.sh@471 -- # waitforlisten 4048841 00:24:58.956 22:16:40 -- common/autotest_common.sh@817 -- # '[' -z 4048841 ']' 00:24:58.956 22:16:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.956 22:16:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.956 22:16:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.956 22:16:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.956 22:16:40 -- common/autotest_common.sh@10 -- # set +x 00:24:58.956 [2024-04-24 22:16:41.041355] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:24:58.956 [2024-04-24 22:16:41.041476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.956 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.956 [2024-04-24 22:16:41.122897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.214 [2024-04-24 22:16:41.241511] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.214 [2024-04-24 22:16:41.241570] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.214 [2024-04-24 22:16:41.241588] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.214 [2024-04-24 22:16:41.241601] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.214 [2024-04-24 22:16:41.241613] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
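Every `rpc_cmd` call in this log (e.g. the `framework_start_init` request/response printed earlier) wraps a JSON-RPC 2.0 message sent to the target over the UNIX domain socket `/var/tmp/spdk.sock`, normally via SPDK's `scripts/rpc.py`. A minimal sketch of the envelope itself (socket transport omitted; this only builds the request string logged above):

```shell
# Build the JSON-RPC 2.0 envelope seen in the "INFO: Requests:" log output.
build_rpc_request() {
  local method=$1 id=$2
  printf '{"jsonrpc": "2.0", "method": "%s", "id": %d}\n' "$method" "$id"
}

build_rpc_request framework_start_init 1
```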
00:24:59.214 [2024-04-24 22:16:41.241654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.214 22:16:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.214 22:16:41 -- common/autotest_common.sh@850 -- # return 0 00:24:59.214 22:16:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:59.214 22:16:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:59.215 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.215 22:16:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.215 22:16:41 -- target/dif.sh@139 -- # create_transport 00:24:59.215 22:16:41 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:59.215 22:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.215 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.215 [2024-04-24 22:16:41.394960] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.215 22:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.215 22:16:41 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:59.215 22:16:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:59.215 22:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.215 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.473 ************************************ 00:24:59.473 START TEST fio_dif_1_default 00:24:59.473 ************************************ 00:24:59.473 22:16:41 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:24:59.473 22:16:41 -- target/dif.sh@86 -- # create_subsystems 0 00:24:59.473 22:16:41 -- target/dif.sh@28 -- # local sub 00:24:59.473 22:16:41 -- target/dif.sh@30 -- # for sub in "$@" 00:24:59.473 22:16:41 -- target/dif.sh@31 -- # create_subsystem 0 00:24:59.473 22:16:41 -- target/dif.sh@18 -- # local sub_id=0 00:24:59.473 22:16:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:59.473 22:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.473 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.473 bdev_null0 00:24:59.473 22:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.473 22:16:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:59.473 22:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.473 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.474 22:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.474 22:16:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:59.474 22:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.474 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.474 22:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.474 22:16:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:59.474 22:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.474 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.474 [2024-04-24 22:16:41.539246] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:59.474 [2024-04-24 22:16:41.539550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.474 22:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.474 22:16:41 -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:59.474 22:16:41 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:59.474 22:16:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:59.474 22:16:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:24:59.474 22:16:41 -- nvmf/common.sh@521 -- # config=() 00:24:59.474 22:16:41 -- nvmf/common.sh@521 -- # local subsystem config 00:24:59.474 22:16:41 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:59.474 22:16:41 -- target/dif.sh@82 -- # gen_fio_conf 00:24:59.474 22:16:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:59.474 22:16:41 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:59.474 22:16:41 -- target/dif.sh@54 -- # local file 00:24:59.474 22:16:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:59.474 { 00:24:59.474 "params": { 00:24:59.474 "name": "Nvme$subsystem", 00:24:59.474 "trtype": "$TEST_TRANSPORT", 00:24:59.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.474 "adrfam": "ipv4", 00:24:59.474 "trsvcid": "$NVMF_PORT", 00:24:59.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.474 "hdgst": ${hdgst:-false}, 00:24:59.474 "ddgst": ${ddgst:-false} 00:24:59.474 }, 00:24:59.474 "method": "bdev_nvme_attach_controller" 00:24:59.474 } 00:24:59.474 EOF 00:24:59.474 )") 00:24:59.474 22:16:41 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:59.474 22:16:41 -- target/dif.sh@56 -- # cat 00:24:59.474 22:16:41 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:59.474 22:16:41 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:59.474 22:16:41 -- common/autotest_common.sh@1327 -- # shift 00:24:59.474 22:16:41 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:59.474 22:16:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.474 22:16:41 -- nvmf/common.sh@543 -- # cat 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:59.474 22:16:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:59.474 22:16:41 -- target/dif.sh@72 -- # (( file <= files )) 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:59.474 22:16:41 -- nvmf/common.sh@545 -- # jq . 00:24:59.474 22:16:41 -- nvmf/common.sh@546 -- # IFS=, 00:24:59.474 22:16:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:59.474 "params": { 00:24:59.474 "name": "Nvme0", 00:24:59.474 "trtype": "tcp", 00:24:59.474 "traddr": "10.0.0.2", 00:24:59.474 "adrfam": "ipv4", 00:24:59.474 "trsvcid": "4420", 00:24:59.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:59.474 "hdgst": false, 00:24:59.474 "ddgst": false 00:24:59.474 }, 00:24:59.474 "method": "bdev_nvme_attach_controller" 00:24:59.474 }' 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:59.474 22:16:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:59.474 22:16:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:59.474 22:16:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:59.474 22:16:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:59.474 22:16:41 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:59.474 22:16:41 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:59.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:59.732 fio-3.35 00:24:59.732 Starting 1 thread 00:24:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.959 00:25:11.959 filename0: (groupid=0, jobs=1): err= 0: pid=4049077: Wed Apr 24 22:16:52 2024 00:25:11.959 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10027msec) 00:25:11.959 slat (nsec): min=5303, max=82838, avg=11012.65, stdev=3726.69 00:25:11.959 clat (usec): min=40926, max=45227, avg=41568.53, stdev=542.20 00:25:11.959 lat (usec): min=40935, max=45258, avg=41579.55, stdev=542.41 00:25:11.959 clat percentiles (usec): 00:25:11.959 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:11.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:25:11.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:11.959 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:25:11.959 | 99.99th=[45351] 00:25:11.959 bw ( KiB/s): min= 352, max= 416, per=99.85%, avg=384.00, stdev=14.68, samples=20 00:25:11.959 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:25:11.959 lat (msec) : 50=100.00% 00:25:11.959 cpu : usr=89.87%, sys=9.78%, ctx=16, majf=0, minf=273 00:25:11.959 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:11.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.959 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.959 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:11.959 00:25:11.959 Run status group 0 (all jobs): 00:25:11.959 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10027-10027msec 00:25:11.959 22:16:52 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:11.959 22:16:52 -- target/dif.sh@43 -- # local sub 00:25:11.959 22:16:52 -- target/dif.sh@45 -- # for 
sub in "$@" 00:25:11.959 22:16:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:11.959 22:16:52 -- target/dif.sh@36 -- # local sub_id=0 00:25:11.959 22:16:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:11.959 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.959 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.959 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.959 22:16:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:11.959 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.959 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.959 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.959 00:25:11.959 real 0m11.159s 00:25:11.959 user 0m10.154s 00:25:11.959 sys 0m1.231s 00:25:11.959 22:16:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:11.959 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.959 ************************************ 00:25:11.959 END TEST fio_dif_1_default 00:25:11.960 ************************************ 00:25:11.960 22:16:52 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:11.960 22:16:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:11.960 22:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 ************************************ 00:25:11.960 START TEST fio_dif_1_multi_subsystems 00:25:11.960 ************************************ 00:25:11.960 22:16:52 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:25:11.960 22:16:52 -- target/dif.sh@92 -- # local files=1 00:25:11.960 22:16:52 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:11.960 22:16:52 -- target/dif.sh@28 -- # local sub 00:25:11.960 22:16:52 -- target/dif.sh@30 -- # for sub in "$@" 00:25:11.960 22:16:52 -- target/dif.sh@31 -- # 
create_subsystem 0 00:25:11.960 22:16:52 -- target/dif.sh@18 -- # local sub_id=0 00:25:11.960 22:16:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 bdev_null0 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 [2024-04-24 22:16:52.848221] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@30 -- # for sub in "$@" 00:25:11.960 22:16:52 -- target/dif.sh@31 -- # create_subsystem 1 00:25:11.960 22:16:52 -- target/dif.sh@18 -- # local sub_id=1 00:25:11.960 22:16:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 bdev_null1 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.960 22:16:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.960 22:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:11.960 22:16:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.960 22:16:52 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:11.960 22:16:52 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:11.960 22:16:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:11.960 22:16:52 -- nvmf/common.sh@521 -- # config=() 00:25:11.960 22:16:52 -- nvmf/common.sh@521 -- # local subsystem config 00:25:11.960 22:16:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:11.960 22:16:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:11.960 { 00:25:11.960 "params": { 00:25:11.960 "name": "Nvme$subsystem", 00:25:11.960 "trtype": "$TEST_TRANSPORT", 00:25:11.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.960 "adrfam": "ipv4", 00:25:11.960 "trsvcid": "$NVMF_PORT", 00:25:11.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:11.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.960 "hdgst": ${hdgst:-false}, 00:25:11.960 "ddgst": ${ddgst:-false} 00:25:11.960 }, 00:25:11.960 "method": "bdev_nvme_attach_controller" 00:25:11.960 } 00:25:11.960 EOF 00:25:11.960 )") 00:25:11.960 22:16:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:11.960 22:16:52 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:11.960 22:16:52 -- target/dif.sh@82 -- # gen_fio_conf 00:25:11.960 22:16:52 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:11.960 22:16:52 -- target/dif.sh@54 -- # local file 00:25:11.960 22:16:52 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:11.960 22:16:52 -- target/dif.sh@56 -- # cat 00:25:11.960 22:16:52 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:11.960 22:16:52 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:11.960 22:16:52 -- common/autotest_common.sh@1327 -- # shift 00:25:11.960 22:16:52 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:11.960 22:16:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:11.960 22:16:52 -- nvmf/common.sh@543 -- # cat 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:11.960 22:16:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:11.960 22:16:52 -- target/dif.sh@72 -- # (( file <= files )) 00:25:11.960 22:16:52 -- target/dif.sh@73 -- # cat 00:25:11.960 22:16:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:25:11.960 22:16:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:11.960 { 00:25:11.960 "params": { 00:25:11.960 "name": "Nvme$subsystem", 00:25:11.960 "trtype": "$TEST_TRANSPORT", 00:25:11.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.960 "adrfam": "ipv4", 00:25:11.960 "trsvcid": "$NVMF_PORT", 00:25:11.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.960 "hdgst": ${hdgst:-false}, 00:25:11.960 "ddgst": ${ddgst:-false} 00:25:11.960 }, 00:25:11.960 "method": "bdev_nvme_attach_controller" 00:25:11.960 } 00:25:11.960 EOF 00:25:11.960 )") 00:25:11.960 22:16:52 -- target/dif.sh@72 -- # (( file++ )) 00:25:11.960 22:16:52 -- target/dif.sh@72 -- # (( file <= files )) 00:25:11.960 22:16:52 -- nvmf/common.sh@543 -- # cat 00:25:11.960 22:16:52 -- nvmf/common.sh@545 -- # jq . 00:25:11.960 22:16:52 -- nvmf/common.sh@546 -- # IFS=, 00:25:11.960 22:16:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:11.960 "params": { 00:25:11.960 "name": "Nvme0", 00:25:11.960 "trtype": "tcp", 00:25:11.960 "traddr": "10.0.0.2", 00:25:11.960 "adrfam": "ipv4", 00:25:11.960 "trsvcid": "4420", 00:25:11.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:11.960 "hdgst": false, 00:25:11.960 "ddgst": false 00:25:11.960 }, 00:25:11.960 "method": "bdev_nvme_attach_controller" 00:25:11.960 },{ 00:25:11.960 "params": { 00:25:11.960 "name": "Nvme1", 00:25:11.960 "trtype": "tcp", 00:25:11.960 "traddr": "10.0.0.2", 00:25:11.960 "adrfam": "ipv4", 00:25:11.960 "trsvcid": "4420", 00:25:11.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.960 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.960 "hdgst": false, 00:25:11.960 "ddgst": false 00:25:11.960 }, 00:25:11.960 "method": "bdev_nvme_attach_controller" 00:25:11.960 }' 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:11.960 22:16:52 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 
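The JSON printed just above is what gen_nvmf_target_json hands to fio's spdk_bdev plugin over /dev/fd/62: one `bdev_nvme_attach_controller` entry per subsystem, comma-joined. A minimal sketch of that per-subsystem template expansion, with stand-in values for the environment variables (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) that the test environment normally supplies:

```shell
# Sketch of the per-subsystem heredoc expansion done by gen_nvmf_target_json
# (nvmf/common.sh in the log above). The three values below are stand-ins
# for what the autotest environment normally exports.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1; do
  # Each iteration expands the template once; unset hdgst/ddgst fall back
  # to "false" via ${var:-false}.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the entries with commas, as the IFS=, / printf pair in the log does.
old_ifs=$IFS
IFS=,
joined="${config[*]}"
IFS=$old_ifs
printf '%s\n' "$joined"
```

This is only an illustration of the shape of the config; the real script also pipes the result through `jq .` before printing it.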
00:25:11.960 22:16:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:11.960 22:16:52 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:11.960 22:16:52 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:11.960 22:16:52 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:11.960 22:16:52 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:11.960 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:11.960 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:11.960 fio-3.35 00:25:11.960 Starting 2 threads 00:25:11.960 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.926 00:25:21.926 filename0: (groupid=0, jobs=1): err= 0: pid=4050493: Wed Apr 24 22:17:03 2024 00:25:21.926 read: IOPS=141, BW=567KiB/s (580kB/s)(5680KiB/10023msec) 00:25:21.926 slat (nsec): min=4347, max=86320, avg=10579.96, stdev=3344.96 00:25:21.926 clat (usec): min=741, max=43992, avg=28198.13, stdev=18912.39 00:25:21.926 lat (usec): min=750, max=44017, avg=28208.71, stdev=18912.27 00:25:21.926 clat percentiles (usec): 00:25:21.926 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 881], 00:25:21.926 | 30.00th=[ 1074], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:21.926 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:25:21.926 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:25:21.926 | 99.99th=[43779] 00:25:21.926 
bw ( KiB/s): min= 384, max= 768, per=59.39%, avg=566.40, stdev=181.05, samples=20 00:25:21.926 iops : min= 96, max= 192, avg=141.60, stdev=45.26, samples=20 00:25:21.926 lat (usec) : 750=0.07%, 1000=25.42% 00:25:21.926 lat (msec) : 2=6.90%, 50=67.61% 00:25:21.926 cpu : usr=94.19%, sys=5.48%, ctx=14, majf=0, minf=188 00:25:21.926 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.926 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.926 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=4050494: Wed Apr 24 22:17:03 2024 00:25:21.926 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10015msec) 00:25:21.926 slat (nsec): min=4851, max=34643, avg=10580.63, stdev=2680.84 00:25:21.926 clat (usec): min=40866, max=43932, avg=41347.34, stdev=501.82 00:25:21.926 lat (usec): min=40875, max=43945, avg=41357.92, stdev=501.88 00:25:21.926 clat percentiles (usec): 00:25:21.926 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:21.926 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:21.926 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:21.926 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:25:21.926 | 99.99th=[43779] 00:25:21.926 bw ( KiB/s): min= 352, max= 416, per=40.40%, avg=385.60, stdev=12.61, samples=20 00:25:21.926 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:25:21.926 lat (msec) : 50=100.00% 00:25:21.926 cpu : usr=95.02%, sys=4.67%, ctx=16, majf=0, minf=114 00:25:21.926 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.926 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.927 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:21.927 00:25:21.927 Run status group 0 (all jobs): 00:25:21.927 READ: bw=953KiB/s (976kB/s), 387KiB/s-567KiB/s (396kB/s-580kB/s), io=9552KiB (9781kB), run=10015-10023msec 00:25:22.185 22:17:04 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:22.185 22:17:04 -- target/dif.sh@43 -- # local sub 00:25:22.185 22:17:04 -- target/dif.sh@45 -- # for sub in "$@" 00:25:22.185 22:17:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:22.185 22:17:04 -- target/dif.sh@36 -- # local sub_id=0 00:25:22.185 22:17:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@45 -- # for sub in "$@" 00:25:22.185 22:17:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:22.185 22:17:04 -- target/dif.sh@36 -- # local sub_id=1 00:25:22.185 22:17:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 00:25:22.185 real 0m11.431s 00:25:22.185 user 0m20.338s 00:25:22.185 sys 0m1.370s 00:25:22.185 22:17:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 ************************************ 00:25:22.185 END TEST fio_dif_1_multi_subsystems 00:25:22.185 ************************************ 00:25:22.185 22:17:04 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:22.185 22:17:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:22.185 22:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 ************************************ 00:25:22.185 START TEST fio_dif_rand_params 00:25:22.185 ************************************ 00:25:22.185 22:17:04 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:25:22.185 22:17:04 -- target/dif.sh@100 -- # local NULL_DIF 00:25:22.185 22:17:04 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:22.185 22:17:04 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:22.185 22:17:04 -- target/dif.sh@103 -- # bs=128k 00:25:22.185 22:17:04 -- target/dif.sh@103 -- # numjobs=3 00:25:22.185 22:17:04 -- target/dif.sh@103 -- # iodepth=3 00:25:22.185 22:17:04 -- target/dif.sh@103 -- # runtime=5 00:25:22.185 22:17:04 -- target/dif.sh@105 -- # create_subsystems 0 00:25:22.185 22:17:04 -- target/dif.sh@28 -- # local sub 00:25:22.185 22:17:04 -- target/dif.sh@30 -- # for sub in "$@" 00:25:22.185 22:17:04 -- target/dif.sh@31 -- # create_subsystem 0 00:25:22.185 22:17:04 -- target/dif.sh@18 -- # local sub_id=0 00:25:22.185 22:17:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 
22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 bdev_null0 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:22.185 22:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.185 22:17:04 -- common/autotest_common.sh@10 -- # set +x 00:25:22.185 [2024-04-24 22:17:04.430451] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.185 22:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.185 22:17:04 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:22.185 22:17:04 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:22.185 22:17:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:22.185 22:17:04 -- nvmf/common.sh@521 -- # config=() 00:25:22.185 22:17:04 -- nvmf/common.sh@521 -- # local subsystem config 00:25:22.185 22:17:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:22.185 22:17:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:22.185 22:17:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:22.185 { 00:25:22.185 "params": { 00:25:22.185 "name": 
"Nvme$subsystem", 00:25:22.185 "trtype": "$TEST_TRANSPORT", 00:25:22.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:22.185 "adrfam": "ipv4", 00:25:22.185 "trsvcid": "$NVMF_PORT", 00:25:22.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:22.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:22.185 "hdgst": ${hdgst:-false}, 00:25:22.185 "ddgst": ${ddgst:-false} 00:25:22.185 }, 00:25:22.185 "method": "bdev_nvme_attach_controller" 00:25:22.185 } 00:25:22.185 EOF 00:25:22.185 )") 00:25:22.185 22:17:04 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:22.185 22:17:04 -- target/dif.sh@82 -- # gen_fio_conf 00:25:22.185 22:17:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:22.185 22:17:04 -- target/dif.sh@54 -- # local file 00:25:22.185 22:17:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:22.185 22:17:04 -- target/dif.sh@56 -- # cat 00:25:22.185 22:17:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:22.185 22:17:04 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:22.185 22:17:04 -- common/autotest_common.sh@1327 -- # shift 00:25:22.185 22:17:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:22.185 22:17:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:22.185 22:17:04 -- nvmf/common.sh@543 -- # cat 00:25:22.185 22:17:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:22.185 22:17:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:22.185 22:17:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:22.185 22:17:04 -- target/dif.sh@72 -- # (( file <= files )) 00:25:22.185 22:17:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 
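The heredoc template above leaves the digest flags to shell default expansion: `${var:-false}` substitutes `false` whenever the variable is unset or empty, which is why the final printed config shows `"hdgst": false` unless the caller has set a digest option. A minimal demonstration (variable names taken from the template; the values here are illustrative):

```shell
# ${var:-default} falls back to "default" when var is unset or empty.
unset hdgst            # not provided by the caller -> "false"
ddgst=true             # explicitly set by the caller -> "true"

hdgst_param="${hdgst:-false}"
ddgst_param="${ddgst:-false}"
echo "\"hdgst\": $hdgst_param, \"ddgst\": $ddgst_param"
```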
00:25:22.185 22:17:04 -- nvmf/common.sh@545 -- # jq . 00:25:22.444 22:17:04 -- nvmf/common.sh@546 -- # IFS=, 00:25:22.444 22:17:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:22.444 "params": { 00:25:22.444 "name": "Nvme0", 00:25:22.444 "trtype": "tcp", 00:25:22.444 "traddr": "10.0.0.2", 00:25:22.444 "adrfam": "ipv4", 00:25:22.444 "trsvcid": "4420", 00:25:22.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:22.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:22.444 "hdgst": false, 00:25:22.444 "ddgst": false 00:25:22.444 }, 00:25:22.444 "method": "bdev_nvme_attach_controller" 00:25:22.444 }' 00:25:22.444 22:17:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:22.444 22:17:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:22.444 22:17:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:22.444 22:17:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:22.444 22:17:04 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:22.444 22:17:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:22.444 22:17:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:22.444 22:17:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:22.444 22:17:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:22.444 22:17:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:22.444 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:22.444 ... 
00:25:22.444 fio-3.35 00:25:22.444 Starting 3 threads 00:25:22.703 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.264 00:25:29.264 filename0: (groupid=0, jobs=1): err= 0: pid=4051903: Wed Apr 24 22:17:10 2024 00:25:29.264 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5048msec) 00:25:29.264 slat (nsec): min=5767, max=35481, avg=16430.26, stdev=3715.33 00:25:29.264 clat (usec): min=5258, max=93947, avg=15031.62, stdev=12868.22 00:25:29.264 lat (usec): min=5271, max=93965, avg=15048.05, stdev=12868.52 00:25:29.264 clat percentiles (usec): 00:25:29.264 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8979], 00:25:29.264 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11600], 60.00th=[12518], 00:25:29.264 | 70.00th=[13435], 80.00th=[14353], 90.00th=[17433], 95.00th=[52691], 00:25:29.264 | 99.00th=[55837], 99.50th=[57410], 99.90th=[90702], 99.95th=[93848], 00:25:29.264 | 99.99th=[93848] 00:25:29.264 bw ( KiB/s): min=18176, max=33280, per=34.56%, avg=25600.00, stdev=5217.21, samples=10 00:25:29.264 iops : min= 142, max= 260, avg=200.00, stdev=40.76, samples=10 00:25:29.264 lat (msec) : 10=34.80%, 20=56.03%, 50=1.60%, 100=7.58% 00:25:29.264 cpu : usr=93.40%, sys=6.02%, ctx=14, majf=0, minf=76 00:25:29.264 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.264 issued rwts: total=1003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.264 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:29.264 filename0: (groupid=0, jobs=1): err= 0: pid=4051904: Wed Apr 24 22:17:10 2024 00:25:29.264 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(121MiB/5049msec) 00:25:29.264 slat (nsec): min=5136, max=54456, avg=15966.28, stdev=4024.89 00:25:29.264 clat (usec): min=5705, max=93221, avg=15615.03, stdev=12590.60 00:25:29.264 lat (usec): min=5719, max=93236, avg=15631.00, 
stdev=12590.67 00:25:29.264 clat percentiles (usec): 00:25:29.264 | 1.00th=[ 6063], 5.00th=[ 6849], 10.00th=[ 8356], 20.00th=[ 9241], 00:25:29.264 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11994], 60.00th=[12911], 00:25:29.264 | 70.00th=[13829], 80.00th=[15270], 90.00th=[46924], 95.00th=[51643], 00:25:29.264 | 99.00th=[55313], 99.50th=[56361], 99.90th=[92799], 99.95th=[92799], 00:25:29.265 | 99.99th=[92799] 00:25:29.265 bw ( KiB/s): min=19712, max=34816, per=33.28%, avg=24652.80, stdev=4484.75, samples=10 00:25:29.265 iops : min= 154, max= 272, avg=192.60, stdev=35.04, samples=10 00:25:29.265 lat (msec) : 10=33.64%, 20=56.31%, 50=2.69%, 100=7.35% 00:25:29.265 cpu : usr=93.07%, sys=6.10%, ctx=70, majf=0, minf=68 00:25:29.265 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.265 issued rwts: total=966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:29.265 filename0: (groupid=0, jobs=1): err= 0: pid=4051905: Wed Apr 24 22:17:10 2024 00:25:29.265 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(119MiB/5006msec) 00:25:29.265 slat (nsec): min=5821, max=49696, avg=14529.63, stdev=3586.72 00:25:29.265 clat (usec): min=5993, max=93109, avg=15738.07, stdev=12740.99 00:25:29.265 lat (usec): min=6007, max=93118, avg=15752.60, stdev=12740.91 00:25:29.265 clat percentiles (usec): 00:25:29.265 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[ 9503], 00:25:29.265 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11863], 60.00th=[13304], 00:25:29.265 | 70.00th=[14484], 80.00th=[15926], 90.00th=[18220], 95.00th=[52691], 00:25:29.265 | 99.00th=[56361], 99.50th=[57410], 99.90th=[92799], 99.95th=[92799], 00:25:29.265 | 99.99th=[92799] 00:25:29.265 bw ( KiB/s): min=17920, max=32256, per=32.84%, avg=24325.10, stdev=5082.91, 
samples=10 00:25:29.265 iops : min= 140, max= 252, avg=190.00, stdev=39.70, samples=10 00:25:29.265 lat (msec) : 10=26.34%, 20=64.43%, 50=0.84%, 100=8.39% 00:25:29.265 cpu : usr=93.31%, sys=6.25%, ctx=7, majf=0, minf=138 00:25:29.265 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.265 issued rwts: total=953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:29.265 00:25:29.265 Run status group 0 (all jobs): 00:25:29.265 READ: bw=72.3MiB/s (75.9MB/s), 23.8MiB/s-24.8MiB/s (25.0MB/s-26.0MB/s), io=365MiB (383MB), run=5006-5049msec 00:25:29.265 22:17:10 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:29.265 22:17:10 -- target/dif.sh@43 -- # local sub 00:25:29.265 22:17:10 -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.265 22:17:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:29.265 22:17:10 -- target/dif.sh@36 -- # local sub_id=0 00:25:29.265 22:17:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:29.265 22:17:10 -- target/dif.sh@109 -- # bs=4k 00:25:29.265 22:17:10 -- target/dif.sh@109 -- # numjobs=8 00:25:29.265 22:17:10 -- target/dif.sh@109 -- # iodepth=16 00:25:29.265 22:17:10 -- 
target/dif.sh@109 -- # runtime= 00:25:29.265 22:17:10 -- target/dif.sh@109 -- # files=2 00:25:29.265 22:17:10 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:29.265 22:17:10 -- target/dif.sh@28 -- # local sub 00:25:29.265 22:17:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.265 22:17:10 -- target/dif.sh@31 -- # create_subsystem 0 00:25:29.265 22:17:10 -- target/dif.sh@18 -- # local sub_id=0 00:25:29.265 22:17:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 bdev_null0 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 [2024-04-24 22:17:10.892110] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@30 -- # for sub in 
"$@" 00:25:29.265 22:17:10 -- target/dif.sh@31 -- # create_subsystem 1 00:25:29.265 22:17:10 -- target/dif.sh@18 -- # local sub_id=1 00:25:29.265 22:17:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 bdev_null1 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.265 22:17:10 -- target/dif.sh@31 -- # create_subsystem 2 00:25:29.265 22:17:10 -- target/dif.sh@18 -- # local sub_id=2 00:25:29.265 22:17:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 
bdev_null2 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.265 22:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.265 22:17:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.265 22:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.265 22:17:10 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:29.265 22:17:10 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:29.265 22:17:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.265 22:17:10 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.265 22:17:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:29.265 22:17:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:29.265 22:17:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.265 22:17:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:29.265 22:17:10 -- nvmf/common.sh@521 -- # config=() 00:25:29.265 
22:17:10 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.265 22:17:10 -- target/dif.sh@82 -- # gen_fio_conf 00:25:29.265 22:17:10 -- common/autotest_common.sh@1327 -- # shift 00:25:29.265 22:17:10 -- nvmf/common.sh@521 -- # local subsystem config 00:25:29.265 22:17:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:29.265 22:17:10 -- target/dif.sh@54 -- # local file 00:25:29.265 22:17:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.265 22:17:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.265 22:17:10 -- target/dif.sh@56 -- # cat 00:25:29.265 22:17:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.265 { 00:25:29.265 "params": { 00:25:29.265 "name": "Nvme$subsystem", 00:25:29.265 "trtype": "$TEST_TRANSPORT", 00:25:29.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.265 "adrfam": "ipv4", 00:25:29.265 "trsvcid": "$NVMF_PORT", 00:25:29.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.265 "hdgst": ${hdgst:-false}, 00:25:29.265 "ddgst": ${ddgst:-false} 00:25:29.265 }, 00:25:29.265 "method": "bdev_nvme_attach_controller" 00:25:29.265 } 00:25:29.265 EOF 00:25:29.265 )") 00:25:29.265 22:17:10 -- nvmf/common.sh@543 -- # cat 00:25:29.265 22:17:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.265 22:17:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:29.265 22:17:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:29.265 22:17:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:29.265 22:17:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:29.266 22:17:10 -- target/dif.sh@73 -- # cat 00:25:29.266 22:17:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.266 22:17:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.266 
{ 00:25:29.266 "params": { 00:25:29.266 "name": "Nvme$subsystem", 00:25:29.266 "trtype": "$TEST_TRANSPORT", 00:25:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.266 "adrfam": "ipv4", 00:25:29.266 "trsvcid": "$NVMF_PORT", 00:25:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.266 "hdgst": ${hdgst:-false}, 00:25:29.266 "ddgst": ${ddgst:-false} 00:25:29.266 }, 00:25:29.266 "method": "bdev_nvme_attach_controller" 00:25:29.266 } 00:25:29.266 EOF 00:25:29.266 )") 00:25:29.266 22:17:10 -- target/dif.sh@72 -- # (( file++ )) 00:25:29.266 22:17:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:29.266 22:17:10 -- nvmf/common.sh@543 -- # cat 00:25:29.266 22:17:10 -- target/dif.sh@73 -- # cat 00:25:29.266 22:17:10 -- target/dif.sh@72 -- # (( file++ )) 00:25:29.266 22:17:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:29.266 22:17:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.266 22:17:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.266 { 00:25:29.266 "params": { 00:25:29.266 "name": "Nvme$subsystem", 00:25:29.266 "trtype": "$TEST_TRANSPORT", 00:25:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.266 "adrfam": "ipv4", 00:25:29.266 "trsvcid": "$NVMF_PORT", 00:25:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.266 "hdgst": ${hdgst:-false}, 00:25:29.266 "ddgst": ${ddgst:-false} 00:25:29.266 }, 00:25:29.266 "method": "bdev_nvme_attach_controller" 00:25:29.266 } 00:25:29.266 EOF 00:25:29.266 )") 00:25:29.266 22:17:10 -- nvmf/common.sh@543 -- # cat 00:25:29.266 22:17:10 -- nvmf/common.sh@545 -- # jq . 
00:25:29.266 22:17:10 -- nvmf/common.sh@546 -- # IFS=, 00:25:29.266 22:17:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:29.266 "params": { 00:25:29.266 "name": "Nvme0", 00:25:29.266 "trtype": "tcp", 00:25:29.266 "traddr": "10.0.0.2", 00:25:29.266 "adrfam": "ipv4", 00:25:29.266 "trsvcid": "4420", 00:25:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:29.266 "hdgst": false, 00:25:29.266 "ddgst": false 00:25:29.266 }, 00:25:29.266 "method": "bdev_nvme_attach_controller" 00:25:29.266 },{ 00:25:29.266 "params": { 00:25:29.266 "name": "Nvme1", 00:25:29.266 "trtype": "tcp", 00:25:29.266 "traddr": "10.0.0.2", 00:25:29.266 "adrfam": "ipv4", 00:25:29.266 "trsvcid": "4420", 00:25:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.266 "hdgst": false, 00:25:29.266 "ddgst": false 00:25:29.266 }, 00:25:29.266 "method": "bdev_nvme_attach_controller" 00:25:29.266 },{ 00:25:29.266 "params": { 00:25:29.266 "name": "Nvme2", 00:25:29.266 "trtype": "tcp", 00:25:29.266 "traddr": "10.0.0.2", 00:25:29.266 "adrfam": "ipv4", 00:25:29.266 "trsvcid": "4420", 00:25:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:29.266 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:29.266 "hdgst": false, 00:25:29.266 "ddgst": false 00:25:29.266 }, 00:25:29.266 "method": "bdev_nvme_attach_controller" 00:25:29.266 }' 00:25:29.266 22:17:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:29.266 22:17:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:29.266 22:17:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.266 22:17:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.266 22:17:10 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:29.266 22:17:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:29.266 22:17:11 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:25:29.266 22:17:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:29.266 22:17:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:29.266 22:17:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.266 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:29.266 ... 00:25:29.266 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:29.266 ... 00:25:29.266 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:29.266 ... 00:25:29.266 fio-3.35 00:25:29.266 Starting 24 threads 00:25:29.266 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.470 00:25:41.470 filename0: (groupid=0, jobs=1): err= 0: pid=4052769: Wed Apr 24 22:17:22 2024 00:25:41.470 read: IOPS=367, BW=1469KiB/s (1505kB/s)(14.4MiB/10018msec) 00:25:41.470 slat (usec): min=8, max=114, avg=67.49, stdev=24.45 00:25:41.470 clat (msec): min=24, max=396, avg=42.97, stdev=36.29 00:25:41.470 lat (msec): min=24, max=396, avg=43.03, stdev=36.29 00:25:41.470 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 38], 00:25:41.471 | 99.00th=[ 239], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 397], 00:25:41.471 | 99.99th=[ 397] 00:25:41.471 bw ( KiB/s): min= 128, max= 1792, per=4.15%, avg=1464.45, stdev=548.13, samples=20 00:25:41.471 iops : min= 32, max= 448, avg=366.10, stdev=137.03, samples=20 00:25:41.471 lat (msec) : 50=96.58%, 100=0.38%, 250=2.07%, 500=0.98% 00:25:41.471 cpu : usr=97.98%, sys=1.54%, ctx=44, majf=0, 
minf=33 00:25:41.471 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052770: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=371, BW=1484KiB/s (1520kB/s)(14.5MiB/10005msec) 00:25:41.471 slat (usec): min=4, max=145, avg=23.10, stdev=13.74 00:25:41.471 clat (msec): min=24, max=314, avg=42.93, stdev=31.04 00:25:41.471 lat (msec): min=24, max=314, avg=42.96, stdev=31.04 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.471 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 249], 99.95th=[ 313], 00:25:41.471 | 99.99th=[ 313] 00:25:41.471 bw ( KiB/s): min= 256, max= 1792, per=4.16%, avg=1468.63, stdev=523.20, samples=19 00:25:41.471 iops : min= 64, max= 448, avg=367.16, stdev=130.80, samples=19 00:25:41.471 lat (msec) : 50=96.12%, 100=0.43%, 250=3.39%, 500=0.05% 00:25:41.471 cpu : usr=98.11%, sys=1.50%, ctx=14, majf=0, minf=24 00:25:41.471 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052771: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=367, BW=1470KiB/s 
(1505kB/s)(14.4MiB/10014msec) 00:25:41.471 slat (usec): min=9, max=103, avg=27.89, stdev=11.34 00:25:41.471 clat (msec): min=26, max=313, avg=43.27, stdev=35.69 00:25:41.471 lat (msec): min=26, max=313, avg=43.30, stdev=35.70 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.471 | 99.00th=[ 239], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 313], 00:25:41.471 | 99.99th=[ 313] 00:25:41.471 bw ( KiB/s): min= 128, max= 1792, per=4.15%, avg=1465.25, stdev=548.42, samples=20 00:25:41.471 iops : min= 32, max= 448, avg=366.30, stdev=137.10, samples=20 00:25:41.471 lat (msec) : 50=96.52%, 100=0.43%, 250=2.07%, 500=0.98% 00:25:41.471 cpu : usr=97.01%, sys=2.07%, ctx=153, majf=0, minf=38 00:25:41.471 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052772: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10007msec) 00:25:41.471 slat (usec): min=8, max=122, avg=72.15, stdev=21.82 00:25:41.471 clat (msec): min=24, max=386, avg=42.86, stdev=35.65 00:25:41.471 lat (msec): min=24, max=386, avg=42.93, stdev=35.65 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 38], 00:25:41.471 | 99.00th=[ 239], 99.50th=[ 275], 99.90th=[ 296], 
99.95th=[ 388], 00:25:41.471 | 99.99th=[ 388] 00:25:41.471 bw ( KiB/s): min= 128, max= 1792, per=4.12%, avg=1455.16, stdev=554.63, samples=19 00:25:41.471 iops : min= 32, max= 448, avg=363.79, stdev=138.66, samples=19 00:25:41.471 lat (msec) : 50=96.58%, 100=0.38%, 250=2.23%, 500=0.82% 00:25:41.471 cpu : usr=97.83%, sys=1.64%, ctx=104, majf=0, minf=33 00:25:41.471 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052773: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=367, BW=1468KiB/s (1504kB/s)(14.4MiB/10025msec) 00:25:41.471 slat (usec): min=13, max=132, avg=75.11, stdev=19.09 00:25:41.471 clat (msec): min=32, max=275, avg=42.92, stdev=34.12 00:25:41.471 lat (msec): min=32, max=275, avg=42.99, stdev=34.12 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 38], 00:25:41.471 | 99.00th=[ 222], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 275], 00:25:41.471 | 99.99th=[ 275] 00:25:41.471 bw ( KiB/s): min= 256, max= 1792, per=4.15%, avg=1465.60, stdev=546.97, samples=20 00:25:41.471 iops : min= 64, max= 448, avg=366.40, stdev=136.74, samples=20 00:25:41.471 lat (msec) : 50=96.52%, 250=2.61%, 500=0.87% 00:25:41.471 cpu : usr=98.23%, sys=1.31%, ctx=31, majf=0, minf=33 00:25:41.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052774: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=367, BW=1468KiB/s (1504kB/s)(14.4MiB/10025msec) 00:25:41.471 slat (nsec): min=7274, max=65274, avg=31016.67, stdev=9351.47 00:25:41.471 clat (msec): min=27, max=314, avg=43.30, stdev=34.85 00:25:41.471 lat (msec): min=27, max=314, avg=43.34, stdev=34.84 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.471 | 99.00th=[ 218], 99.50th=[ 284], 99.90th=[ 313], 99.95th=[ 313], 00:25:41.471 | 99.99th=[ 313] 00:25:41.471 bw ( KiB/s): min= 255, max= 1792, per=4.15%, avg=1465.55, stdev=547.09, samples=20 00:25:41.471 iops : min= 63, max= 448, avg=366.35, stdev=136.86, samples=20 00:25:41.471 lat (msec) : 50=96.52%, 250=2.55%, 500=0.92% 00:25:41.471 cpu : usr=95.74%, sys=2.70%, ctx=105, majf=0, minf=27 00:25:41.471 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052775: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=367, BW=1468KiB/s (1504kB/s)(14.4MiB/10025msec) 00:25:41.471 slat (usec): min=6, max=114, avg=62.87, stdev=21.59 00:25:41.471 clat (msec): min=26, max=344, avg=43.02, stdev=33.74 00:25:41.471 lat (msec): min=26, max=344, avg=43.08, stdev=33.74 
00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.471 | 99.00th=[ 220], 99.50th=[ 249], 99.90th=[ 288], 99.95th=[ 347], 00:25:41.471 | 99.99th=[ 347] 00:25:41.471 bw ( KiB/s): min= 256, max= 1792, per=4.15%, avg=1463.65, stdev=546.30, samples=20 00:25:41.471 iops : min= 64, max= 448, avg=365.90, stdev=136.57, samples=20 00:25:41.471 lat (msec) : 50=96.52%, 250=3.26%, 500=0.22% 00:25:41.471 cpu : usr=98.03%, sys=1.51%, ctx=19, majf=0, minf=40 00:25:41.471 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.471 filename0: (groupid=0, jobs=1): err= 0: pid=4052776: Wed Apr 24 22:17:22 2024 00:25:41.471 read: IOPS=366, BW=1467KiB/s (1503kB/s)(14.4MiB/10032msec) 00:25:41.471 slat (usec): min=4, max=133, avg=55.89, stdev=26.31 00:25:41.471 clat (msec): min=32, max=321, avg=43.13, stdev=34.38 00:25:41.471 lat (msec): min=32, max=321, avg=43.19, stdev=34.38 00:25:41.471 clat percentiles (msec): 00:25:41.471 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.471 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.471 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 38], 00:25:41.471 | 99.00th=[ 222], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 321], 00:25:41.471 | 99.99th=[ 321] 00:25:41.471 bw ( KiB/s): min= 240, max= 1792, per=4.15%, avg=1464.15, stdev=546.13, samples=20 00:25:41.471 iops : min= 60, max= 448, avg=366.00, stdev=136.51, samples=20 00:25:41.471 lat (msec) : 
50=96.52%, 250=2.61%, 500=0.87% 00:25:41.471 cpu : usr=96.78%, sys=2.07%, ctx=49, majf=0, minf=28 00:25:41.471 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:41.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.471 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052777: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=367, BW=1469KiB/s (1504kB/s)(14.4MiB/10022msec) 00:25:41.472 slat (usec): min=9, max=118, avg=33.04, stdev= 8.69 00:25:41.472 clat (msec): min=32, max=276, avg=43.26, stdev=33.97 00:25:41.472 lat (msec): min=32, max=276, avg=43.29, stdev=33.97 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.472 | 99.00th=[ 222], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 275], 00:25:41.472 | 99.99th=[ 275] 00:25:41.472 bw ( KiB/s): min= 256, max= 1792, per=4.16%, avg=1469.90, stdev=548.95, samples=20 00:25:41.472 iops : min= 64, max= 448, avg=367.45, stdev=137.22, samples=20 00:25:41.472 lat (msec) : 50=96.52%, 250=2.61%, 500=0.87% 00:25:41.472 cpu : usr=96.47%, sys=2.25%, ctx=60, majf=0, minf=27 00:25:41.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052778: 
Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=369, BW=1478KiB/s (1513kB/s)(14.5MiB/10022msec) 00:25:41.472 slat (usec): min=8, max=123, avg=30.93, stdev= 8.82 00:25:41.472 clat (msec): min=26, max=266, avg=43.04, stdev=30.50 00:25:41.472 lat (msec): min=26, max=266, avg=43.07, stdev=30.50 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.472 | 99.00th=[ 220], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 268], 00:25:41.472 | 99.99th=[ 268] 00:25:41.472 bw ( KiB/s): min= 256, max= 1792, per=4.17%, avg=1472.75, stdev=526.99, samples=20 00:25:41.472 iops : min= 64, max= 448, avg=368.15, stdev=131.74, samples=20 00:25:41.472 lat (msec) : 50=95.95%, 250=4.00%, 500=0.05% 00:25:41.472 cpu : usr=97.56%, sys=1.88%, ctx=48, majf=0, minf=36 00:25:41.472 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052779: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10012msec) 00:25:41.472 slat (usec): min=7, max=123, avg=23.51, stdev= 9.45 00:25:41.472 clat (msec): min=26, max=274, avg=43.33, stdev=34.02 00:25:41.472 lat (msec): min=26, max=274, avg=43.35, stdev=34.02 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.472 | 
99.00th=[ 234], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 275], 00:25:41.472 | 99.99th=[ 275] 00:25:41.472 bw ( KiB/s): min= 239, max= 1792, per=4.15%, avg=1465.55, stdev=539.18, samples=20 00:25:41.472 iops : min= 59, max= 448, avg=366.35, stdev=134.88, samples=20 00:25:41.472 lat (msec) : 50=96.52%, 100=0.05%, 250=2.55%, 500=0.87% 00:25:41.472 cpu : usr=98.01%, sys=1.59%, ctx=13, majf=0, minf=40 00:25:41.472 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052780: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=366, BW=1467KiB/s (1502kB/s)(14.4MiB/10035msec) 00:25:41.472 slat (usec): min=4, max=124, avg=72.91, stdev=21.71 00:25:41.472 clat (msec): min=32, max=357, avg=42.98, stdev=34.72 00:25:41.472 lat (msec): min=32, max=357, avg=43.06, stdev=34.72 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 38], 00:25:41.472 | 99.00th=[ 222], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 359], 00:25:41.472 | 99.99th=[ 359] 00:25:41.472 bw ( KiB/s): min= 256, max= 1792, per=4.15%, avg=1463.80, stdev=545.90, samples=20 00:25:41.472 iops : min= 64, max= 448, avg=365.95, stdev=136.48, samples=20 00:25:41.472 lat (msec) : 50=96.58%, 250=2.50%, 500=0.92% 00:25:41.472 cpu : usr=97.45%, sys=1.80%, ctx=100, majf=0, minf=29 00:25:41.472 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052781: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=370, BW=1481KiB/s (1516kB/s)(14.5MiB/10026msec) 00:25:41.472 slat (usec): min=4, max=161, avg=32.21, stdev=11.72 00:25:41.472 clat (msec): min=24, max=248, avg=42.92, stdev=31.50 00:25:41.472 lat (msec): min=24, max=248, avg=42.95, stdev=31.51 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.472 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 249], 99.95th=[ 249], 00:25:41.472 | 99.99th=[ 249] 00:25:41.472 bw ( KiB/s): min= 256, max= 1792, per=4.19%, avg=1478.40, stdev=519.48, samples=20 00:25:41.472 iops : min= 64, max= 448, avg=369.60, stdev=129.87, samples=20 00:25:41.472 lat (msec) : 50=96.12%, 100=0.43%, 250=3.45% 00:25:41.472 cpu : usr=97.96%, sys=1.60%, ctx=16, majf=0, minf=38 00:25:41.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052782: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=367, BW=1470KiB/s (1505kB/s)(14.4MiB/10016msec) 00:25:41.472 slat (usec): min=11, max=124, avg=73.82, stdev=18.84 00:25:41.472 clat (msec): min=26, max=303, avg=42.89, stdev=35.67 00:25:41.472 lat (msec): min=26, 
max=303, avg=42.96, stdev=35.67 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 38], 00:25:41.472 | 99.00th=[ 239], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 305], 00:25:41.472 | 99.99th=[ 305] 00:25:41.472 bw ( KiB/s): min= 128, max= 1792, per=4.15%, avg=1464.75, stdev=548.24, samples=20 00:25:41.472 iops : min= 32, max= 448, avg=366.15, stdev=137.05, samples=20 00:25:41.472 lat (msec) : 50=96.52%, 100=0.43%, 250=2.17%, 500=0.87% 00:25:41.472 cpu : usr=98.27%, sys=1.29%, ctx=16, majf=0, minf=35 00:25:41.472 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052783: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=367, BW=1471KiB/s (1506kB/s)(14.4MiB/10008msec) 00:25:41.472 slat (nsec): min=8453, max=62921, avg=25442.71, stdev=10568.70 00:25:41.472 clat (msec): min=8, max=295, avg=43.27, stdev=35.40 00:25:41.472 lat (msec): min=8, max=295, avg=43.29, stdev=35.40 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.472 | 99.00th=[ 239], 99.50th=[ 275], 99.90th=[ 296], 99.95th=[ 296], 00:25:41.472 | 99.99th=[ 296] 00:25:41.472 bw ( KiB/s): min= 128, max= 1792, per=4.12%, avg=1454.32, stdev=552.87, samples=19 00:25:41.472 iops : min= 32, max= 448, avg=363.58, 
stdev=138.22, samples=19 00:25:41.472 lat (msec) : 10=0.05%, 50=96.41%, 100=0.49%, 250=2.17%, 500=0.87% 00:25:41.472 cpu : usr=95.56%, sys=2.72%, ctx=123, majf=0, minf=31 00:25:41.472 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:25:41.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.472 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.472 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.472 filename1: (groupid=0, jobs=1): err= 0: pid=4052784: Wed Apr 24 22:17:22 2024 00:25:41.472 read: IOPS=366, BW=1464KiB/s (1500kB/s)(14.3MiB/10008msec) 00:25:41.472 slat (usec): min=14, max=119, avg=64.66, stdev=22.77 00:25:41.472 clat (msec): min=21, max=386, avg=43.32, stdev=37.61 00:25:41.472 lat (msec): min=21, max=386, avg=43.39, stdev=37.60 00:25:41.472 clat percentiles (msec): 00:25:41.472 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.472 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.472 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 41], 95.00th=[ 46], 00:25:41.472 | 99.00th=[ 222], 99.50th=[ 284], 99.90th=[ 388], 99.95th=[ 388], 00:25:41.472 | 99.99th=[ 388] 00:25:41.473 bw ( KiB/s): min= 128, max= 1776, per=4.10%, avg=1448.42, stdev=566.10, samples=19 00:25:41.473 iops : min= 32, max= 444, avg=362.11, stdev=141.52, samples=19 00:25:41.473 lat (msec) : 50=95.69%, 100=1.26%, 250=2.51%, 500=0.55% 00:25:41.473 cpu : usr=98.11%, sys=1.44%, ctx=14, majf=0, minf=34 00:25:41.473 IO depths : 1=0.2%, 2=3.7%, 4=15.2%, 8=66.7%, 16=14.2%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=92.1%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052785: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=367, BW=1469KiB/s (1504kB/s)(14.4MiB/10021msec) 00:25:41.473 slat (nsec): min=5970, max=62367, avg=30112.16, stdev=7640.02 00:25:41.473 clat (msec): min=32, max=276, avg=43.29, stdev=33.97 00:25:41.473 lat (msec): min=32, max=276, avg=43.32, stdev=33.97 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.473 | 99.00th=[ 222], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 275], 00:25:41.473 | 99.99th=[ 275] 00:25:41.473 bw ( KiB/s): min= 256, max= 1792, per=4.12%, avg=1455.16, stdev=559.91, samples=19 00:25:41.473 iops : min= 64, max= 448, avg=363.79, stdev=139.98, samples=19 00:25:41.473 lat (msec) : 50=96.52%, 250=2.61%, 500=0.87% 00:25:41.473 cpu : usr=98.12%, sys=1.46%, ctx=6, majf=0, minf=34 00:25:41.473 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052786: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=367, BW=1470KiB/s (1505kB/s)(14.4MiB/10013msec) 00:25:41.473 slat (usec): min=9, max=115, avg=48.75, stdev=32.03 00:25:41.473 clat (msec): min=20, max=330, avg=43.10, stdev=35.73 00:25:41.473 lat (msec): min=20, max=330, avg=43.15, stdev=35.73 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 
60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 38], 00:25:41.473 | 99.00th=[ 239], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 330], 00:25:41.473 | 99.99th=[ 330] 00:25:41.473 bw ( KiB/s): min= 128, max= 1792, per=4.15%, avg=1465.25, stdev=549.25, samples=20 00:25:41.473 iops : min= 32, max= 448, avg=366.30, stdev=137.31, samples=20 00:25:41.473 lat (msec) : 50=96.47%, 100=0.49%, 250=2.12%, 500=0.92% 00:25:41.473 cpu : usr=97.86%, sys=1.70%, ctx=20, majf=0, minf=32 00:25:41.473 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052787: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=371, BW=1488KiB/s (1524kB/s)(14.5MiB/10001msec) 00:25:41.473 slat (usec): min=8, max=101, avg=30.79, stdev=10.90 00:25:41.473 clat (msec): min=24, max=224, avg=42.76, stdev=28.08 00:25:41.473 lat (msec): min=24, max=224, avg=42.79, stdev=28.08 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.473 | 99.00th=[ 197], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 226], 00:25:41.473 | 99.99th=[ 226] 00:25:41.473 bw ( KiB/s): min= 256, max= 1792, per=4.17%, avg=1472.00, stdev=514.13, samples=19 00:25:41.473 iops : min= 64, max= 448, avg=368.00, stdev=128.53, samples=19 00:25:41.473 lat (msec) : 50=95.48%, 100=0.16%, 250=4.35% 00:25:41.473 cpu : usr=97.63%, sys=1.70%, ctx=98, majf=0, minf=58 00:25:41.473 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 
16=6.6%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052788: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10012msec) 00:25:41.473 slat (usec): min=5, max=101, avg=26.29, stdev= 8.04 00:25:41.473 clat (msec): min=32, max=261, avg=43.29, stdev=33.45 00:25:41.473 lat (msec): min=32, max=261, avg=43.32, stdev=33.45 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.473 | 99.00th=[ 222], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 262], 00:25:41.473 | 99.99th=[ 262] 00:25:41.473 bw ( KiB/s): min= 256, max= 1792, per=4.15%, avg=1465.60, stdev=539.03, samples=20 00:25:41.473 iops : min= 64, max= 448, avg=366.40, stdev=134.76, samples=20 00:25:41.473 lat (msec) : 50=96.52%, 250=3.04%, 500=0.43% 00:25:41.473 cpu : usr=93.33%, sys=3.66%, ctx=216, majf=0, minf=38 00:25:41.473 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052789: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=374, BW=1497KiB/s (1533kB/s)(14.6MiB/10004msec) 00:25:41.473 slat (nsec): min=8982, max=58598, avg=18432.46, 
stdev=9445.57 00:25:41.473 clat (msec): min=27, max=213, avg=42.59, stdev=25.45 00:25:41.473 lat (msec): min=27, max=213, avg=42.61, stdev=25.45 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 49], 00:25:41.473 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 213], 99.95th=[ 213], 00:25:41.473 | 99.99th=[ 213] 00:25:41.473 bw ( KiB/s): min= 384, max= 1792, per=4.20%, avg=1482.16, stdev=490.68, samples=19 00:25:41.473 iops : min= 96, max= 448, avg=370.53, stdev=122.70, samples=19 00:25:41.473 lat (msec) : 50=95.30%, 100=0.43%, 250=4.27% 00:25:41.473 cpu : usr=97.78%, sys=1.65%, ctx=45, majf=0, minf=42 00:25:41.473 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052790: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=367, BW=1468KiB/s (1504kB/s)(14.4MiB/10025msec) 00:25:41.473 slat (usec): min=8, max=120, avg=67.71, stdev=22.83 00:25:41.473 clat (msec): min=32, max=314, avg=42.98, stdev=34.20 00:25:41.473 lat (msec): min=32, max=314, avg=43.05, stdev=34.19 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 38], 00:25:41.473 | 99.00th=[ 222], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 313], 00:25:41.473 | 99.99th=[ 313] 00:25:41.473 bw ( KiB/s): min= 256, max= 1792, 
per=4.15%, avg=1465.60, stdev=546.97, samples=20 00:25:41.473 iops : min= 64, max= 448, avg=366.40, stdev=136.74, samples=20 00:25:41.473 lat (msec) : 50=96.52%, 250=2.61%, 500=0.87% 00:25:41.473 cpu : usr=98.10%, sys=1.44%, ctx=11, majf=0, minf=35 00:25:41.473 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052791: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=367, BW=1470KiB/s (1506kB/s)(14.4MiB/10011msec) 00:25:41.473 slat (nsec): min=9323, max=89005, avg=23765.90, stdev=10930.12 00:25:41.473 clat (msec): min=15, max=298, avg=43.33, stdev=35.44 00:25:41.473 lat (msec): min=15, max=298, avg=43.35, stdev=35.44 00:25:41.473 clat percentiles (msec): 00:25:41.473 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.473 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.473 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 39], 00:25:41.473 | 99.00th=[ 236], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:25:41.473 | 99.99th=[ 300] 00:25:41.473 bw ( KiB/s): min= 128, max= 1792, per=4.15%, avg=1465.60, stdev=548.20, samples=20 00:25:41.473 iops : min= 32, max= 448, avg=366.40, stdev=137.05, samples=20 00:25:41.473 lat (msec) : 20=0.05%, 50=96.25%, 100=0.65%, 250=2.17%, 500=0.87% 00:25:41.473 cpu : usr=98.08%, sys=1.51%, ctx=19, majf=0, minf=30 00:25:41.473 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:25:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.473 issued rwts: 
total=3680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.473 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.473 filename2: (groupid=0, jobs=1): err= 0: pid=4052792: Wed Apr 24 22:17:22 2024 00:25:41.473 read: IOPS=368, BW=1476KiB/s (1511kB/s)(14.4MiB/10008msec) 00:25:41.473 slat (nsec): min=8160, max=96565, avg=24940.31, stdev=12834.89 00:25:41.473 clat (msec): min=16, max=463, avg=43.18, stdev=37.65 00:25:41.474 lat (msec): min=16, max=463, avg=43.20, stdev=37.65 00:25:41.474 clat percentiles (msec): 00:25:41.474 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 37], 00:25:41.474 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:25:41.474 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 38], 95.00th=[ 44], 00:25:41.474 | 99.00th=[ 222], 99.50th=[ 284], 99.90th=[ 388], 99.95th=[ 464], 00:25:41.474 | 99.99th=[ 464] 00:25:41.474 bw ( KiB/s): min= 128, max= 1840, per=4.14%, avg=1460.21, stdev=573.30, samples=19 00:25:41.474 iops : min= 32, max= 460, avg=365.05, stdev=143.32, samples=19 00:25:41.474 lat (msec) : 20=0.22%, 50=96.10%, 100=0.65%, 250=2.49%, 500=0.54% 00:25:41.474 cpu : usr=94.49%, sys=3.10%, ctx=118, majf=0, minf=28 00:25:41.474 IO depths : 1=2.5%, 2=6.4%, 4=16.1%, 8=63.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:41.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.474 complete : 0=0.0%, 4=92.3%, 8=3.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:41.474 issued rwts: total=3692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:41.474 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:41.474 00:25:41.474 Run status group 0 (all jobs): 00:25:41.474 READ: bw=34.5MiB/s (36.1MB/s), 1464KiB/s-1497KiB/s (1500kB/s-1533kB/s), io=346MiB (363MB), run=10001-10035msec 00:25:41.474 22:17:22 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:41.474 22:17:22 -- target/dif.sh@43 -- # local sub 00:25:41.474 22:17:22 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.474 22:17:22 -- target/dif.sh@46 -- 
# destroy_subsystem 0 00:25:41.474 22:17:22 -- target/dif.sh@36 -- # local sub_id=0 00:25:41.474 22:17:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.474 22:17:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:41.474 22:17:22 -- target/dif.sh@36 -- # local sub_id=1 00:25:41.474 22:17:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@45 -- # for sub in "$@" 00:25:41.474 22:17:22 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:41.474 22:17:22 -- target/dif.sh@36 -- # local sub_id=2 00:25:41.474 22:17:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # numjobs=2 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # iodepth=8 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # runtime=5 00:25:41.474 22:17:22 -- target/dif.sh@115 -- # files=1 00:25:41.474 22:17:22 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:41.474 22:17:22 -- target/dif.sh@28 -- # local sub 00:25:41.474 22:17:22 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.474 22:17:22 -- target/dif.sh@31 -- # create_subsystem 0 00:25:41.474 22:17:22 -- target/dif.sh@18 -- # local sub_id=0 00:25:41.474 22:17:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 bdev_null0 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 [2024-04-24 22:17:22.527481] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@30 -- # for sub in "$@" 00:25:41.474 22:17:22 -- target/dif.sh@31 -- # create_subsystem 1 00:25:41.474 22:17:22 -- target/dif.sh@18 -- # local sub_id=1 00:25:41.474 22:17:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 bdev_null1 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.474 22:17:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.474 
22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:25:41.474 22:17:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.474 22:17:22 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:41.474 22:17:22 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:41.474 22:17:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:41.474 22:17:22 -- nvmf/common.sh@521 -- # config=() 00:25:41.474 22:17:22 -- nvmf/common.sh@521 -- # local subsystem config 00:25:41.474 22:17:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.474 22:17:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:41.474 22:17:22 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.474 22:17:22 -- target/dif.sh@82 -- # gen_fio_conf 00:25:41.474 22:17:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:41.474 { 00:25:41.474 "params": { 00:25:41.474 "name": "Nvme$subsystem", 00:25:41.474 "trtype": "$TEST_TRANSPORT", 00:25:41.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.474 "adrfam": "ipv4", 00:25:41.474 "trsvcid": "$NVMF_PORT", 00:25:41.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.474 "hdgst": ${hdgst:-false}, 00:25:41.474 "ddgst": ${ddgst:-false} 00:25:41.474 }, 00:25:41.474 "method": "bdev_nvme_attach_controller" 00:25:41.474 } 00:25:41.474 EOF 00:25:41.474 )") 00:25:41.474 22:17:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:41.474 22:17:22 -- target/dif.sh@54 -- # local file 00:25:41.474 22:17:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:41.474 22:17:22 -- target/dif.sh@56 -- # cat 00:25:41.474 22:17:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:41.474 22:17:22 -- common/autotest_common.sh@1326 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:41.474 22:17:22 -- common/autotest_common.sh@1327 -- # shift 00:25:41.474 22:17:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:41.474 22:17:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.474 22:17:22 -- nvmf/common.sh@543 -- # cat 00:25:41.474 22:17:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:41.474 22:17:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:41.474 22:17:22 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.474 22:17:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:41.474 22:17:22 -- target/dif.sh@73 -- # cat 00:25:41.474 22:17:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:41.474 22:17:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:41.474 22:17:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:41.474 { 00:25:41.474 "params": { 00:25:41.474 "name": "Nvme$subsystem", 00:25:41.474 "trtype": "$TEST_TRANSPORT", 00:25:41.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.474 "adrfam": "ipv4", 00:25:41.474 "trsvcid": "$NVMF_PORT", 00:25:41.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.474 "hdgst": ${hdgst:-false}, 00:25:41.474 "ddgst": ${ddgst:-false} 00:25:41.474 }, 00:25:41.474 "method": "bdev_nvme_attach_controller" 00:25:41.474 } 00:25:41.474 EOF 00:25:41.474 )") 00:25:41.474 22:17:22 -- target/dif.sh@72 -- # (( file++ )) 00:25:41.474 22:17:22 -- nvmf/common.sh@543 -- # cat 00:25:41.475 22:17:22 -- target/dif.sh@72 -- # (( file <= files )) 00:25:41.475 22:17:22 -- nvmf/common.sh@545 -- # jq . 
00:25:41.475 22:17:22 -- nvmf/common.sh@546 -- # IFS=, 00:25:41.475 22:17:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:41.475 "params": { 00:25:41.475 "name": "Nvme0", 00:25:41.475 "trtype": "tcp", 00:25:41.475 "traddr": "10.0.0.2", 00:25:41.475 "adrfam": "ipv4", 00:25:41.475 "trsvcid": "4420", 00:25:41.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:41.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:41.475 "hdgst": false, 00:25:41.475 "ddgst": false 00:25:41.475 }, 00:25:41.475 "method": "bdev_nvme_attach_controller" 00:25:41.475 },{ 00:25:41.475 "params": { 00:25:41.475 "name": "Nvme1", 00:25:41.475 "trtype": "tcp", 00:25:41.475 "traddr": "10.0.0.2", 00:25:41.475 "adrfam": "ipv4", 00:25:41.475 "trsvcid": "4420", 00:25:41.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.475 "hdgst": false, 00:25:41.475 "ddgst": false 00:25:41.475 }, 00:25:41.475 "method": "bdev_nvme_attach_controller" 00:25:41.475 }' 00:25:41.475 22:17:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:41.475 22:17:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:41.475 22:17:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:41.475 22:17:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:41.475 22:17:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:41.475 22:17:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:41.475 22:17:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:41.475 22:17:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:41.475 22:17:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:41.475 22:17:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:41.475 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:41.475 ... 00:25:41.475 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:41.475 ... 00:25:41.475 fio-3.35 00:25:41.475 Starting 4 threads 00:25:41.475 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.770 00:25:46.770 filename0: (groupid=0, jobs=1): err= 0: pid=4054175: Wed Apr 24 22:17:28 2024 00:25:46.770 read: IOPS=1701, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5002msec) 00:25:46.770 slat (usec): min=4, max=100, avg=22.99, stdev=10.53 00:25:46.770 clat (usec): min=879, max=8284, avg=4613.41, stdev=419.18 00:25:46.770 lat (usec): min=894, max=8304, avg=4636.40, stdev=419.04 00:25:46.770 clat percentiles (usec): 00:25:46.771 | 1.00th=[ 3556], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4490], 00:25:46.771 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:25:46.771 | 70.00th=[ 4686], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5014], 00:25:46.771 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7570], 00:25:46.771 | 99.99th=[ 8291] 00:25:46.771 bw ( KiB/s): min=13184, max=13824, per=24.95%, avg=13612.30, stdev=186.28, samples=10 00:25:46.771 iops : min= 1648, max= 1728, avg=1701.50, stdev=23.25, samples=10 00:25:46.771 lat (usec) : 1000=0.02% 00:25:46.771 lat (msec) : 2=0.16%, 4=3.05%, 10=96.76% 00:25:46.771 cpu : usr=94.56%, sys=4.60%, ctx=38, majf=0, minf=75 00:25:46.771 IO depths : 1=0.1%, 2=18.9%, 4=54.7%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 issued rwts: total=8512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.771 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:46.771 filename0: (groupid=0, jobs=1): err= 0: pid=4054176: Wed Apr 24 22:17:28 2024 00:25:46.771 read: 
IOPS=1723, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5003msec) 00:25:46.771 slat (nsec): min=4224, max=89386, avg=22250.91, stdev=8363.65 00:25:46.771 clat (usec): min=1430, max=8372, avg=4559.33, stdev=451.28 00:25:46.771 lat (usec): min=1445, max=8407, avg=4581.58, stdev=451.99 00:25:46.771 clat percentiles (usec): 00:25:46.771 | 1.00th=[ 2835], 5.00th=[ 3818], 10.00th=[ 4293], 20.00th=[ 4490], 00:25:46.771 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:25:46.771 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:25:46.771 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 8029], 00:25:46.771 | 99.99th=[ 8356] 00:25:46.771 bw ( KiB/s): min=13312, max=14976, per=25.29%, avg=13795.20, stdev=497.62, samples=10 00:25:46.771 iops : min= 1664, max= 1872, avg=1724.40, stdev=62.20, samples=10 00:25:46.771 lat (msec) : 2=0.05%, 4=6.75%, 10=93.21% 00:25:46.771 cpu : usr=94.54%, sys=4.84%, ctx=12, majf=0, minf=112 00:25:46.771 IO depths : 1=0.1%, 2=19.6%, 4=54.5%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 issued rwts: total=8625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.771 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:46.771 filename1: (groupid=0, jobs=1): err= 0: pid=4054177: Wed Apr 24 22:17:28 2024 00:25:46.771 read: IOPS=1697, BW=13.3MiB/s (13.9MB/s)(66.3MiB/5001msec) 00:25:46.771 slat (nsec): min=4158, max=60951, avg=23367.07, stdev=10504.75 00:25:46.771 clat (usec): min=923, max=49359, avg=4624.63, stdev=1417.97 00:25:46.771 lat (usec): min=944, max=49372, avg=4648.00, stdev=1417.62 00:25:46.771 clat percentiles (usec): 00:25:46.771 | 1.00th=[ 3392], 5.00th=[ 4113], 10.00th=[ 4424], 20.00th=[ 4490], 00:25:46.771 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:25:46.771 | 70.00th=[ 4621], 80.00th=[ 
4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:25:46.771 | 99.00th=[ 5866], 99.50th=[ 6390], 99.90th=[ 8094], 99.95th=[49546], 00:25:46.771 | 99.99th=[49546] 00:25:46.771 bw ( KiB/s): min=12864, max=14000, per=24.87%, avg=13566.22, stdev=349.49, samples=9 00:25:46.771 iops : min= 1608, max= 1750, avg=1695.78, stdev=43.69, samples=9 00:25:46.771 lat (usec) : 1000=0.01% 00:25:46.771 lat (msec) : 2=0.06%, 4=4.02%, 10=95.82%, 50=0.09% 00:25:46.771 cpu : usr=95.08%, sys=4.30%, ctx=32, majf=0, minf=83 00:25:46.771 IO depths : 1=0.1%, 2=19.6%, 4=54.0%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 issued rwts: total=8491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.771 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:46.771 filename1: (groupid=0, jobs=1): err= 0: pid=4054178: Wed Apr 24 22:17:28 2024 00:25:46.771 read: IOPS=1697, BW=13.3MiB/s (13.9MB/s)(66.3MiB/5002msec) 00:25:46.771 slat (nsec): min=4169, max=56079, avg=18346.36, stdev=9621.88 00:25:46.771 clat (usec): min=1175, max=8531, avg=4657.37, stdev=449.21 00:25:46.771 lat (usec): min=1195, max=8540, avg=4675.72, stdev=448.98 00:25:46.771 clat percentiles (usec): 00:25:46.771 | 1.00th=[ 3556], 5.00th=[ 4178], 10.00th=[ 4490], 20.00th=[ 4555], 00:25:46.771 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4621], 00:25:46.771 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4948], 00:25:46.771 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 8029], 99.95th=[ 8455], 00:25:46.771 | 99.99th=[ 8586] 00:25:46.771 bw ( KiB/s): min=13120, max=13984, per=24.90%, avg=13583.40, stdev=261.83, samples=10 00:25:46.771 iops : min= 1640, max= 1748, avg=1697.90, stdev=32.74, samples=10 00:25:46.771 lat (msec) : 2=0.01%, 4=3.86%, 10=96.13% 00:25:46.771 cpu : usr=95.18%, sys=4.30%, ctx=8, majf=0, minf=107 00:25:46.771 IO 
depths : 1=0.3%, 2=5.0%, 4=69.3%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.771 issued rwts: total=8491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.771 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:46.771 00:25:46.771 Run status group 0 (all jobs): 00:25:46.771 READ: bw=53.3MiB/s (55.9MB/s), 13.3MiB/s-13.5MiB/s (13.9MB/s-14.1MB/s), io=267MiB (280MB), run=5001-5003msec 00:25:47.030 22:17:29 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:47.030 22:17:29 -- target/dif.sh@43 -- # local sub 00:25:47.030 22:17:29 -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.030 22:17:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:47.030 22:17:29 -- target/dif.sh@36 -- # local sub_id=0 00:25:47.030 22:17:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@45 -- # for sub in "$@" 00:25:47.030 22:17:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:47.030 22:17:29 -- target/dif.sh@36 -- # local sub_id=1 00:25:47.030 22:17:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 00:25:47.030 real 0m24.684s 00:25:47.030 user 4m31.988s 00:25:47.030 sys 0m7.453s 00:25:47.030 22:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 ************************************ 00:25:47.030 END TEST fio_dif_rand_params 00:25:47.030 ************************************ 00:25:47.030 22:17:29 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:47.030 22:17:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:47.030 22:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 ************************************ 00:25:47.030 START TEST fio_dif_digest 00:25:47.030 ************************************ 00:25:47.030 22:17:29 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:25:47.030 22:17:29 -- target/dif.sh@123 -- # local NULL_DIF 00:25:47.030 22:17:29 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:47.030 22:17:29 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:47.030 22:17:29 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:47.030 22:17:29 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:47.030 22:17:29 -- target/dif.sh@127 -- # numjobs=3 00:25:47.030 22:17:29 -- target/dif.sh@127 -- # iodepth=3 00:25:47.030 22:17:29 -- target/dif.sh@127 -- # runtime=10 00:25:47.030 22:17:29 -- target/dif.sh@128 -- # hdgst=true 00:25:47.030 22:17:29 -- target/dif.sh@128 -- # ddgst=true 00:25:47.030 22:17:29 -- target/dif.sh@130 -- # create_subsystems 0 00:25:47.030 22:17:29 -- target/dif.sh@28 -- # local sub 00:25:47.030 22:17:29 -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:47.030 22:17:29 -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.030 22:17:29 -- target/dif.sh@18 -- # local sub_id=0 00:25:47.030 22:17:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 bdev_null0 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.030 22:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.030 22:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.030 [2024-04-24 22:17:29.260645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.030 22:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.030 22:17:29 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:47.030 22:17:29 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:47.030 22:17:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:47.030 22:17:29 -- nvmf/common.sh@521 -- # config=() 00:25:47.030 22:17:29 -- 
nvmf/common.sh@521 -- # local subsystem config 00:25:47.030 22:17:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:47.030 22:17:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:47.030 { 00:25:47.030 "params": { 00:25:47.030 "name": "Nvme$subsystem", 00:25:47.030 "trtype": "$TEST_TRANSPORT", 00:25:47.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.030 "adrfam": "ipv4", 00:25:47.030 "trsvcid": "$NVMF_PORT", 00:25:47.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.030 "hdgst": ${hdgst:-false}, 00:25:47.030 "ddgst": ${ddgst:-false} 00:25:47.030 }, 00:25:47.030 "method": "bdev_nvme_attach_controller" 00:25:47.030 } 00:25:47.030 EOF 00:25:47.030 )") 00:25:47.030 22:17:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.030 22:17:29 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.030 22:17:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:47.030 22:17:29 -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.030 22:17:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.030 22:17:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:47.030 22:17:29 -- target/dif.sh@54 -- # local file 00:25:47.030 22:17:29 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.030 22:17:29 -- target/dif.sh@56 -- # cat 00:25:47.030 22:17:29 -- common/autotest_common.sh@1327 -- # shift 00:25:47.030 22:17:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:47.030 22:17:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.030 22:17:29 -- nvmf/common.sh@543 -- # cat 00:25:47.030 22:17:29 -- 
common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.030 22:17:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.030 22:17:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:47.030 22:17:29 -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.030 22:17:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:47.030 22:17:29 -- nvmf/common.sh@545 -- # jq . 00:25:47.030 22:17:29 -- nvmf/common.sh@546 -- # IFS=, 00:25:47.030 22:17:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:47.030 "params": { 00:25:47.030 "name": "Nvme0", 00:25:47.030 "trtype": "tcp", 00:25:47.030 "traddr": "10.0.0.2", 00:25:47.030 "adrfam": "ipv4", 00:25:47.030 "trsvcid": "4420", 00:25:47.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.030 "hdgst": true, 00:25:47.030 "ddgst": true 00:25:47.030 }, 00:25:47.030 "method": "bdev_nvme_attach_controller" 00:25:47.030 }' 00:25:47.288 22:17:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:47.288 22:17:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:47.288 22:17:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.288 22:17:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.288 22:17:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:47.288 22:17:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:47.288 22:17:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:47.288 22:17:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:47.288 22:17:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:47.288 22:17:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.288 filename0: (g=0): 
rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:47.288 ... 00:25:47.288 fio-3.35 00:25:47.288 Starting 3 threads 00:25:47.588 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.786 00:25:59.786 filename0: (groupid=0, jobs=1): err= 0: pid=4055064: Wed Apr 24 22:17:40 2024 00:25:59.786 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10045msec) 00:25:59.786 slat (nsec): min=5367, max=39460, avg=15815.97, stdev=3424.59 00:25:59.786 clat (usec): min=8995, max=59769, avg=16308.63, stdev=2535.19 00:25:59.786 lat (usec): min=9009, max=59784, avg=16324.45, stdev=2535.18 00:25:59.786 clat percentiles (usec): 00:25:59.786 | 1.00th=[10814], 5.00th=[14091], 10.00th=[14746], 20.00th=[15401], 00:25:59.786 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:25:59.786 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18220], 00:25:59.786 | 99.00th=[19268], 99.50th=[20317], 99.90th=[58459], 99.95th=[60031], 00:25:59.786 | 99.99th=[60031] 00:25:59.786 bw ( KiB/s): min=22272, max=24576, per=32.78%, avg=23567.15, stdev=515.24, samples=20 00:25:59.786 iops : min= 174, max= 192, avg=184.10, stdev= 4.02, samples=20 00:25:59.786 lat (msec) : 10=0.27%, 20=99.19%, 50=0.33%, 100=0.22% 00:25:59.786 cpu : usr=94.11%, sys=5.21%, ctx=36, majf=0, minf=123 00:25:59.786 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.786 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:59.786 filename0: (groupid=0, jobs=1): err= 0: pid=4055065: Wed Apr 24 22:17:40 2024 00:25:59.786 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(237MiB/10047msec) 00:25:59.786 slat (nsec): min=6110, max=38840, avg=15398.69, stdev=3163.41 00:25:59.786 
clat (usec): min=9592, max=58363, avg=15855.16, stdev=2396.41 00:25:59.786 lat (usec): min=9605, max=58379, avg=15870.56, stdev=2396.44 00:25:59.786 clat percentiles (usec): 00:25:59.786 | 1.00th=[10814], 5.00th=[13698], 10.00th=[14353], 20.00th=[14877], 00:25:59.786 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:25:59.786 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17957], 00:25:59.786 | 99.00th=[19006], 99.50th=[19268], 99.90th=[57410], 99.95th=[58459], 00:25:59.786 | 99.99th=[58459] 00:25:59.786 bw ( KiB/s): min=21760, max=25600, per=33.72%, avg=24243.20, stdev=910.23, samples=20 00:25:59.786 iops : min= 170, max= 200, avg=189.40, stdev= 7.11, samples=20 00:25:59.786 lat (msec) : 10=0.16%, 20=99.58%, 50=0.11%, 100=0.16% 00:25:59.786 cpu : usr=94.15%, sys=5.39%, ctx=23, majf=0, minf=140 00:25:59.786 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.786 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:59.787 filename0: (groupid=0, jobs=1): err= 0: pid=4055066: Wed Apr 24 22:17:40 2024 00:25:59.787 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10045msec) 00:25:59.787 slat (nsec): min=5243, max=98806, avg=17025.96, stdev=5112.07 00:25:59.787 clat (usec): min=9339, max=58730, avg=15783.81, stdev=3691.61 00:25:59.787 lat (usec): min=9352, max=58750, avg=15800.83, stdev=3691.84 00:25:59.787 clat percentiles (usec): 00:25:59.787 | 1.00th=[11469], 5.00th=[13698], 10.00th=[14091], 20.00th=[14615], 00:25:59.787 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:25:59.787 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:25:59.787 | 99.00th=[19006], 99.50th=[56361], 99.90th=[57934], 99.95th=[58983], 
00:25:59.787 | 99.99th=[58983] 00:25:59.787 bw ( KiB/s): min=22016, max=25344, per=33.86%, avg=24345.60, stdev=1003.24, samples=20 00:25:59.787 iops : min= 172, max= 198, avg=190.20, stdev= 7.84, samples=20 00:25:59.787 lat (msec) : 10=0.32%, 20=98.90%, 50=0.05%, 100=0.74% 00:25:59.787 cpu : usr=93.07%, sys=6.31%, ctx=24, majf=0, minf=275 00:25:59.787 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.787 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.787 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:59.787 00:25:59.787 Run status group 0 (all jobs): 00:25:59.787 READ: bw=70.2MiB/s (73.6MB/s), 22.9MiB/s-23.7MiB/s (24.0MB/s-24.8MB/s), io=705MiB (740MB), run=10045-10047msec 00:25:59.787 22:17:40 -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:59.787 22:17:40 -- target/dif.sh@43 -- # local sub 00:25:59.787 22:17:40 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.787 22:17:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.787 22:17:40 -- target/dif.sh@36 -- # local sub_id=0 00:25:59.787 22:17:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.787 22:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.787 22:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:59.787 22:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.787 22:17:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.787 22:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.787 22:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:59.787 22:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.787 00:25:59.787 real 0m11.322s 00:25:59.787 user 0m29.460s 00:25:59.787 sys 0m1.994s 00:25:59.787 22:17:40 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.787 22:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:59.787 ************************************ 00:25:59.787 END TEST fio_dif_digest 00:25:59.787 ************************************ 00:25:59.787 22:17:40 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:59.787 22:17:40 -- target/dif.sh@147 -- # nvmftestfini 00:25:59.787 22:17:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:59.787 22:17:40 -- nvmf/common.sh@117 -- # sync 00:25:59.787 22:17:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.787 22:17:40 -- nvmf/common.sh@120 -- # set +e 00:25:59.787 22:17:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.787 22:17:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.787 rmmod nvme_tcp 00:25:59.787 rmmod nvme_fabrics 00:25:59.787 rmmod nvme_keyring 00:25:59.787 22:17:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.787 22:17:40 -- nvmf/common.sh@124 -- # set -e 00:25:59.787 22:17:40 -- nvmf/common.sh@125 -- # return 0 00:25:59.787 22:17:40 -- nvmf/common.sh@478 -- # '[' -n 4048841 ']' 00:25:59.787 22:17:40 -- nvmf/common.sh@479 -- # killprocess 4048841 00:25:59.787 22:17:40 -- common/autotest_common.sh@936 -- # '[' -z 4048841 ']' 00:25:59.787 22:17:40 -- common/autotest_common.sh@940 -- # kill -0 4048841 00:25:59.787 22:17:40 -- common/autotest_common.sh@941 -- # uname 00:25:59.787 22:17:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:59.787 22:17:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4048841 00:25:59.787 22:17:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:59.787 22:17:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:59.787 22:17:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4048841' 00:25:59.787 killing process with pid 4048841 00:25:59.787 22:17:40 -- common/autotest_common.sh@955 -- # kill 4048841 00:25:59.787 [2024-04-24 
22:17:40.665586] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:59.787 22:17:40 -- common/autotest_common.sh@960 -- # wait 4048841 00:25:59.787 22:17:40 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:25:59.787 22:17:40 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:00.046 Waiting for block devices as requested 00:26:00.046 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:00.306 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:00.306 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:00.306 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:00.306 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:00.564 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:00.564 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:00.564 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:00.564 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:00.822 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:00.822 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:00.822 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:00.822 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:01.079 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:01.079 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:01.079 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:01.079 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:01.338 22:17:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:01.338 22:17:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:01.338 22:17:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.338 22:17:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:01.338 22:17:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.338 22:17:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:26:01.338 22:17:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.240 22:17:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.240 00:26:03.240 real 1m8.274s 00:26:03.240 user 6m29.923s 00:26:03.240 sys 0m19.878s 00:26:03.240 22:17:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:03.240 22:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:03.240 ************************************ 00:26:03.240 END TEST nvmf_dif 00:26:03.240 ************************************ 00:26:03.240 22:17:45 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:03.240 22:17:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:03.240 22:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.240 22:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:03.499 ************************************ 00:26:03.499 START TEST nvmf_abort_qd_sizes 00:26:03.499 ************************************ 00:26:03.499 22:17:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:03.499 * Looking for test storage... 
00:26:03.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.499 22:17:45 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.499 22:17:45 -- nvmf/common.sh@7 -- # uname -s 00:26:03.499 22:17:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.499 22:17:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.499 22:17:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.499 22:17:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.499 22:17:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.499 22:17:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.499 22:17:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.499 22:17:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.499 22:17:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.499 22:17:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.499 22:17:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:03.499 22:17:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:03.499 22:17:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.499 22:17:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.499 22:17:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.499 22:17:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.499 22:17:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.499 22:17:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.499 22:17:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.499 22:17:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.499 22:17:45 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.499 22:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.499 22:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.499 22:17:45 -- paths/export.sh@5 -- # export PATH 00:26:03.499 22:17:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.499 22:17:45 -- nvmf/common.sh@47 -- # : 0 00:26:03.499 22:17:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.499 22:17:45 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:26:03.499 22:17:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.499 22:17:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.499 22:17:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.499 22:17:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.499 22:17:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.499 22:17:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.499 22:17:45 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:03.499 22:17:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:03.499 22:17:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.499 22:17:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:03.499 22:17:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:03.499 22:17:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:03.499 22:17:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.499 22:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:03.499 22:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.499 22:17:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:03.499 22:17:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:03.499 22:17:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.499 22:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.034 22:17:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:06.034 22:17:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.034 22:17:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.034 22:17:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.034 22:17:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.034 22:17:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.034 22:17:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.034 22:17:48 -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.034 22:17:48 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:26:06.034 22:17:48 -- nvmf/common.sh@296 -- # e810=() 00:26:06.034 22:17:48 -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.034 22:17:48 -- nvmf/common.sh@297 -- # x722=() 00:26:06.034 22:17:48 -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.034 22:17:48 -- nvmf/common.sh@298 -- # mlx=() 00:26:06.034 22:17:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.034 22:17:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.034 22:17:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.034 22:17:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.034 22:17:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.034 22:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.034 22:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:06.034 Found 
0000:84:00.0 (0x8086 - 0x159b) 00:26:06.034 22:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.034 22:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:06.034 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:06.034 22:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.034 22:17:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.034 22:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.034 22:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.034 22:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:06.034 22:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.034 22:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:06.034 Found net devices under 0000:84:00.0: cvl_0_0 00:26:06.034 22:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.034 22:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.034 22:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.034 22:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:06.034 
22:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.035 22:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:06.035 Found net devices under 0000:84:00.1: cvl_0_1 00:26:06.035 22:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.035 22:17:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:06.035 22:17:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:06.035 22:17:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:06.035 22:17:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:06.035 22:17:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:06.035 22:17:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.035 22:17:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.035 22:17:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.035 22:17:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:06.035 22:17:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.035 22:17:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.035 22:17:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:06.035 22:17:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.035 22:17:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.035 22:17:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:06.035 22:17:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:06.035 22:17:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.035 22:17:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.035 22:17:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.035 22:17:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.035 22:17:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:06.035 22:17:48 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.035 22:17:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.035 22:17:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.035 22:17:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:06.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:26:06.035 00:26:06.035 --- 10.0.0.2 ping statistics --- 00:26:06.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.035 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:06.035 22:17:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:26:06.035 00:26:06.035 --- 10.0.0.1 ping statistics --- 00:26:06.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.035 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:06.035 22:17:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.035 22:17:48 -- nvmf/common.sh@411 -- # return 0 00:26:06.035 22:17:48 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:26:06.035 22:17:48 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:07.411 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:07.411 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:26:07.411 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:07.411 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:08.346 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:08.346 22:17:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.346 22:17:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:08.346 22:17:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:08.346 22:17:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.346 22:17:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:08.346 22:17:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:08.604 22:17:50 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:08.604 22:17:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:08.604 22:17:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:08.604 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:26:08.604 22:17:50 -- nvmf/common.sh@470 -- # nvmfpid=4060012 00:26:08.604 22:17:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:08.604 22:17:50 -- nvmf/common.sh@471 -- # waitforlisten 4060012 00:26:08.604 22:17:50 -- common/autotest_common.sh@817 -- # '[' -z 4060012 ']' 00:26:08.604 22:17:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.604 22:17:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:08.604 22:17:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.604 22:17:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:08.604 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:26:08.604 [2024-04-24 22:17:50.663252] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:26:08.605 [2024-04-24 22:17:50.663350] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.605 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.605 [2024-04-24 22:17:50.741680] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.863 [2024-04-24 22:17:50.861849] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.863 [2024-04-24 22:17:50.861919] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.863 [2024-04-24 22:17:50.861935] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.863 [2024-04-24 22:17:50.861949] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.863 [2024-04-24 22:17:50.861960] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:08.863 [2024-04-24 22:17:50.862045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.863 [2024-04-24 22:17:50.862355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.863 [2024-04-24 22:17:50.862415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.863 [2024-04-24 22:17:50.862420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.863 22:17:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:08.863 22:17:50 -- common/autotest_common.sh@850 -- # return 0 00:26:08.863 22:17:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:08.863 22:17:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:08.863 22:17:50 -- common/autotest_common.sh@10 -- # set +x 00:26:08.863 22:17:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:08.863 22:17:51 -- scripts/common.sh@309 -- # local bdf bdfs 00:26:08.863 22:17:51 -- scripts/common.sh@310 -- # local nvmes 00:26:08.863 22:17:51 -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:26:08.863 22:17:51 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:08.863 22:17:51 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:08.863 22:17:51 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:26:08.863 22:17:51 -- scripts/common.sh@320 -- # uname -s 00:26:08.863 22:17:51 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:08.863 22:17:51 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:08.863 22:17:51 -- scripts/common.sh@325 -- # (( 1 )) 00:26:08.863 22:17:51 -- 
scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:08.863 22:17:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:08.863 22:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.863 22:17:51 -- common/autotest_common.sh@10 -- # set +x 00:26:08.863 ************************************ 00:26:08.863 START TEST spdk_target_abort 00:26:08.863 ************************************ 00:26:08.863 22:17:51 -- common/autotest_common.sh@1111 -- # spdk_target 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:08.863 22:17:51 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:26:08.863 22:17:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.863 22:17:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 spdk_targetn1 00:26:12.146 22:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.146 22:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.146 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 [2024-04-24 22:17:53.963808] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.146 22:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:12.146 22:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.146 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 22:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.146 22:17:53 -- 
target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:12.146 22:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.146 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 22:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:12.146 22:17:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.146 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 [2024-04-24 22:17:53.995820] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:12.146 [2024-04-24 22:17:53.996175] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.146 22:17:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:12.146 22:17:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:12.146 22:17:54 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:12.146 22:17:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:12.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.461 Initializing NVMe Controllers 00:26:15.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:15.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:15.461 Initialization complete. Launching workers. 
00:26:15.462 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9515, failed: 0 00:26:15.462 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 8273 00:26:15.462 success 711, unsuccess 531, failed 0 00:26:15.462 22:17:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:15.462 22:17:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:15.462 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.774 Initializing NVMe Controllers 00:26:18.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:18.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:18.774 Initialization complete. Launching workers. 00:26:18.774 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8370, failed: 0 00:26:18.774 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7138 00:26:18.774 success 284, unsuccess 948, failed 0 00:26:18.774 22:18:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:18.774 22:18:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:18.774 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.060 Initializing NVMe Controllers 00:26:22.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:22.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:22.060 Initialization complete. Launching workers. 
00:26:22.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29759, failed: 0 00:26:22.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2744, failed to submit 27015 00:26:22.060 success 480, unsuccess 2264, failed 0 00:26:22.060 22:18:03 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:22.060 22:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.060 22:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:22.060 22:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.060 22:18:03 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:22.060 22:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.060 22:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:22.996 22:18:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.996 22:18:05 -- target/abort_qd_sizes.sh@61 -- # killprocess 4060012 00:26:22.996 22:18:05 -- common/autotest_common.sh@936 -- # '[' -z 4060012 ']' 00:26:22.996 22:18:05 -- common/autotest_common.sh@940 -- # kill -0 4060012 00:26:22.996 22:18:05 -- common/autotest_common.sh@941 -- # uname 00:26:22.996 22:18:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:22.996 22:18:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4060012 00:26:22.996 22:18:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:22.996 22:18:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:22.996 22:18:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4060012' 00:26:22.996 killing process with pid 4060012 00:26:22.996 22:18:05 -- common/autotest_common.sh@955 -- # kill 4060012 00:26:22.996 [2024-04-24 22:18:05.072858] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:26:22.996 22:18:05 -- common/autotest_common.sh@960 -- # wait 4060012 00:26:23.255 00:26:23.255 real 0m14.230s 00:26:23.255 user 0m54.017s 00:26:23.255 sys 0m2.779s 00:26:23.255 22:18:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:23.255 22:18:05 -- common/autotest_common.sh@10 -- # set +x 00:26:23.255 ************************************ 00:26:23.255 END TEST spdk_target_abort 00:26:23.255 ************************************ 00:26:23.255 22:18:05 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:23.255 22:18:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.255 22:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.255 22:18:05 -- common/autotest_common.sh@10 -- # set +x 00:26:23.255 ************************************ 00:26:23.255 START TEST kernel_target_abort 00:26:23.255 ************************************ 00:26:23.255 22:18:05 -- common/autotest_common.sh@1111 -- # kernel_target 00:26:23.255 22:18:05 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:23.255 22:18:05 -- nvmf/common.sh@717 -- # local ip 00:26:23.255 22:18:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:23.255 22:18:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:23.255 22:18:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.255 22:18:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.255 22:18:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:23.255 22:18:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.255 22:18:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:23.255 22:18:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:23.255 22:18:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:23.255 22:18:05 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:23.255 22:18:05 -- nvmf/common.sh@621 -- # local 
kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:23.255 22:18:05 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:23.255 22:18:05 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.255 22:18:05 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:23.255 22:18:05 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:23.255 22:18:05 -- nvmf/common.sh@628 -- # local block nvme 00:26:23.255 22:18:05 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:23.255 22:18:05 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:23.255 22:18:05 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:23.255 22:18:05 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:24.631 Waiting for block devices as requested 00:26:24.631 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:24.631 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:24.889 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:24.889 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:24.889 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:24.889 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:25.148 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.148 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:25.148 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:25.148 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:25.407 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:25.407 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:25.407 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:25.407 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:25.666 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:25.666 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:25.666 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:25.666 22:18:07 
-- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:25.666 22:18:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:25.666 22:18:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:25.666 22:18:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:25.666 22:18:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:25.666 22:18:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.666 22:18:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:25.666 22:18:07 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:25.666 22:18:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:25.925 No valid GPT data, bailing 00:26:25.925 22:18:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:25.925 22:18:07 -- scripts/common.sh@391 -- # pt= 00:26:25.925 22:18:07 -- scripts/common.sh@392 -- # return 1 00:26:25.925 22:18:08 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:25.925 22:18:08 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:26:25.925 22:18:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.925 22:18:08 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:25.925 22:18:08 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:25.925 22:18:08 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:25.925 22:18:08 -- nvmf/common.sh@656 -- # echo 1 00:26:25.925 22:18:08 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:26:25.925 22:18:08 -- nvmf/common.sh@658 -- # echo 1 00:26:25.925 22:18:08 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:25.925 22:18:08 -- nvmf/common.sh@661 -- # echo tcp 00:26:25.925 22:18:08 -- nvmf/common.sh@662 -- # echo 4420 00:26:25.925 22:18:08 -- nvmf/common.sh@663 -- # echo ipv4 00:26:25.925 22:18:08 -- 
nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:25.925 22:18:08 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:26:25.925 00:26:25.925 Discovery Log Number of Records 2, Generation counter 2 00:26:25.925 =====Discovery Log Entry 0====== 00:26:25.925 trtype: tcp 00:26:25.925 adrfam: ipv4 00:26:25.925 subtype: current discovery subsystem 00:26:25.925 treq: not specified, sq flow control disable supported 00:26:25.925 portid: 1 00:26:25.925 trsvcid: 4420 00:26:25.925 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:25.925 traddr: 10.0.0.1 00:26:25.925 eflags: none 00:26:25.925 sectype: none 00:26:25.925 =====Discovery Log Entry 1====== 00:26:25.925 trtype: tcp 00:26:25.925 adrfam: ipv4 00:26:25.925 subtype: nvme subsystem 00:26:25.925 treq: not specified, sq flow control disable supported 00:26:25.925 portid: 1 00:26:25.925 trsvcid: 4420 00:26:25.925 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:25.925 traddr: 10.0.0.1 00:26:25.925 eflags: none 00:26:25.925 sectype: none 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:25.925 22:18:08 -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:25.925 22:18:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.925 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.211 Initializing NVMe Controllers 00:26:29.211 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:29.211 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:29.211 Initialization complete. Launching workers. 
00:26:29.211 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34788, failed: 0 00:26:29.211 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34788, failed to submit 0 00:26:29.211 success 0, unsuccess 34788, failed 0 00:26:29.211 22:18:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:29.211 22:18:11 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:29.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.503 Initializing NVMe Controllers 00:26:32.503 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:32.503 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:32.503 Initialization complete. Launching workers. 00:26:32.503 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67778, failed: 0 00:26:32.503 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17090, failed to submit 50688 00:26:32.503 success 0, unsuccess 17090, failed 0 00:26:32.503 22:18:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:32.504 22:18:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:32.504 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.792 Initializing NVMe Controllers 00:26:35.792 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:35.792 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:35.792 Initialization complete. Launching workers. 
00:26:35.792 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66015, failed: 0 00:26:35.792 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16462, failed to submit 49553 00:26:35.792 success 0, unsuccess 16462, failed 0 00:26:35.792 22:18:17 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:35.792 22:18:17 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:35.792 22:18:17 -- nvmf/common.sh@675 -- # echo 0 00:26:35.792 22:18:17 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:35.792 22:18:17 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:35.792 22:18:17 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:35.792 22:18:17 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:35.792 22:18:17 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:35.792 22:18:17 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:35.792 22:18:17 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:36.359 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:36.359 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:26:36.359 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:36.359 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:37.735 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:37.735 00:26:37.735 real 0m14.256s 00:26:37.735 user 0m5.444s 00:26:37.735 sys 0m3.567s 00:26:37.735 22:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:37.735 22:18:19 -- common/autotest_common.sh@10 -- # set +x 00:26:37.735 ************************************ 00:26:37.735 END TEST kernel_target_abort 00:26:37.735 ************************************ 00:26:37.735 22:18:19 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:37.735 22:18:19 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:37.735 22:18:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:37.735 22:18:19 -- nvmf/common.sh@117 -- # sync 00:26:37.735 22:18:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.735 22:18:19 -- nvmf/common.sh@120 -- # set +e 00:26:37.735 22:18:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.735 22:18:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.735 rmmod nvme_tcp 00:26:37.735 rmmod nvme_fabrics 00:26:37.735 rmmod nvme_keyring 00:26:37.735 22:18:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:37.735 22:18:19 -- nvmf/common.sh@124 -- # set -e 00:26:37.735 22:18:19 -- nvmf/common.sh@125 -- # return 0 00:26:37.735 22:18:19 -- nvmf/common.sh@478 -- # '[' -n 4060012 ']' 00:26:37.735 22:18:19 -- nvmf/common.sh@479 -- # killprocess 4060012 00:26:37.735 22:18:19 -- common/autotest_common.sh@936 -- # '[' -z 4060012 ']' 00:26:37.735 22:18:19 -- common/autotest_common.sh@940 -- # kill -0 4060012 00:26:37.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4060012) - No such process 00:26:37.735 22:18:19 -- common/autotest_common.sh@963 -- # echo 'Process with 
pid 4060012 is not found' 00:26:37.735 Process with pid 4060012 is not found 00:26:37.735 22:18:19 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:26:37.735 22:18:19 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:39.114 Waiting for block devices as requested 00:26:39.114 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:39.114 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:39.114 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:39.114 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:39.114 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:39.114 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:39.412 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:39.412 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:39.412 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:39.412 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:39.672 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:39.672 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:39.672 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:39.672 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:39.931 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:39.931 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:39.931 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:39.931 22:18:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:39.931 22:18:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:39.931 22:18:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.931 22:18:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.931 22:18:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.931 22:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:39.931 22:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.468 22:18:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.468 00:26:42.468 real 0m38.621s 00:26:42.468 
user 1m1.760s 00:26:42.468 sys 0m10.310s 00:26:42.468 22:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:42.468 22:18:24 -- common/autotest_common.sh@10 -- # set +x 00:26:42.468 ************************************ 00:26:42.468 END TEST nvmf_abort_qd_sizes 00:26:42.468 ************************************ 00:26:42.468 22:18:24 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:42.468 22:18:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:42.468 22:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.468 22:18:24 -- common/autotest_common.sh@10 -- # set +x 00:26:42.468 ************************************ 00:26:42.468 START TEST keyring_file 00:26:42.468 ************************************ 00:26:42.468 22:18:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:42.468 * Looking for test storage... 00:26:42.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:42.468 22:18:24 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:42.468 22:18:24 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.468 22:18:24 -- nvmf/common.sh@7 -- # uname -s 00:26:42.468 22:18:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.468 22:18:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.468 22:18:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.468 22:18:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.468 22:18:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.468 22:18:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.468 22:18:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.468 22:18:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.468 22:18:24 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.468 22:18:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.468 22:18:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:42.468 22:18:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:42.468 22:18:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.468 22:18:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.468 22:18:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.468 22:18:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.468 22:18:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.468 22:18:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.468 22:18:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.468 22:18:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.468 22:18:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.468 22:18:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.468 22:18:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.468 22:18:24 -- paths/export.sh@5 -- # export PATH 00:26:42.468 22:18:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.468 22:18:24 -- nvmf/common.sh@47 -- # : 0 00:26:42.468 22:18:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.468 22:18:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.468 22:18:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.468 22:18:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.468 22:18:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.468 22:18:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.468 22:18:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:42.468 22:18:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:42.468 22:18:24 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:42.468 22:18:24 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:42.468 22:18:24 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:42.468 22:18:24 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:42.468 22:18:24 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:42.468 22:18:24 -- 
keyring/file.sh@24 -- # trap cleanup EXIT 00:26:42.468 22:18:24 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:42.468 22:18:24 -- keyring/common.sh@15 -- # local name key digest path 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # name=key0 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # digest=0 00:26:42.468 22:18:24 -- keyring/common.sh@18 -- # mktemp 00:26:42.468 22:18:24 -- keyring/common.sh@18 -- # path=/tmp/tmp.HcJzYLX7Vb 00:26:42.468 22:18:24 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:42.468 22:18:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:42.468 22:18:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # digest=0 00:26:42.468 22:18:24 -- nvmf/common.sh@694 -- # python - 00:26:42.468 22:18:24 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HcJzYLX7Vb 00:26:42.468 22:18:24 -- keyring/common.sh@23 -- # echo /tmp/tmp.HcJzYLX7Vb 00:26:42.468 22:18:24 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HcJzYLX7Vb 00:26:42.468 22:18:24 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:42.468 22:18:24 -- keyring/common.sh@15 -- # local name key digest path 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # name=key1 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:42.468 22:18:24 -- keyring/common.sh@17 -- # digest=0 00:26:42.468 22:18:24 -- keyring/common.sh@18 -- # mktemp 00:26:42.468 22:18:24 -- keyring/common.sh@18 -- # path=/tmp/tmp.9KWfpadwsk 00:26:42.468 22:18:24 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:42.468 
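The `prep_key` calls above pipe the hex key through `format_interchange_psk`, which invokes an inline `python -` snippet with prefix `NVMeTLSkey-1` and digest `0`. A minimal sketch of what that encoding likely computes, assuming it follows the NVMe/TCP TLS PSK interchange layout (prefix, two-hex-digit hash indicator, then base64 of the raw key with its little-endian CRC32 appended) — the function name and exact field layout here are an assumption, not taken from the script source:

```python
import base64
import zlib


def format_interchange_psk(key_hex: str, digest: int,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the PSK interchange encoding (assumed layout):
    <prefix>:<2-hex-digit digest>:<base64(key || crc32_le(key))>:
    digest 0 is taken to mean "no hash transform"."""
    key = bytes.fromhex(key_hex)
    # Append the CRC32 of the raw key, little-endian, before base64-encoding.
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "%s:%02x:%s:" % (prefix, digest, base64.b64encode(key + crc).decode())


# Same inputs as key0 in the log above.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what gets written to the `mktemp` path and `chmod 0600`-ed before being registered with `keyring_file_add_key`.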
22:18:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:42.468 22:18:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:26:42.468 22:18:24 -- nvmf/common.sh@693 -- # digest=0 00:26:42.468 22:18:24 -- nvmf/common.sh@694 -- # python - 00:26:42.468 22:18:24 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9KWfpadwsk 00:26:42.468 22:18:24 -- keyring/common.sh@23 -- # echo /tmp/tmp.9KWfpadwsk 00:26:42.468 22:18:24 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9KWfpadwsk 00:26:42.468 22:18:24 -- keyring/file.sh@30 -- # tgtpid=4066428 00:26:42.468 22:18:24 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:42.468 22:18:24 -- keyring/file.sh@32 -- # waitforlisten 4066428 00:26:42.468 22:18:24 -- common/autotest_common.sh@817 -- # '[' -z 4066428 ']' 00:26:42.468 22:18:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.468 22:18:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:42.468 22:18:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.468 22:18:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:42.468 22:18:24 -- common/autotest_common.sh@10 -- # set +x 00:26:42.468 [2024-04-24 22:18:24.555900] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
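After launching `spdk_tgt`, the harness blocks in `waitforlisten` until the target accepts connections on `/var/tmp/spdk.sock` (hence the "Waiting for process to start up and listen on UNIX domain socket..." message). A minimal sketch of that polling loop — the helper name and retry cadence are illustrative assumptions, not the actual `waitforlisten` implementation:

```python
import socket
import time


def wait_for_listen(path: str, timeout: float = 10.0) -> bool:
    """Poll until some process accepts connections on the given
    UNIX domain socket, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # listener is up; rpc.py calls can proceed
        except OSError:
            time.sleep(0.1)  # not bound/listening yet; retry
        finally:
            s.close()
    return False
```

The same pattern repeats later for the bdevperf control socket `/var/tmp/bperf.sock`.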
00:26:42.468 [2024-04-24 22:18:24.555995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066428 ] 00:26:42.468 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.468 [2024-04-24 22:18:24.631671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.729 [2024-04-24 22:18:24.749825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.989 22:18:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:42.989 22:18:25 -- common/autotest_common.sh@850 -- # return 0 00:26:42.989 22:18:25 -- keyring/file.sh@33 -- # rpc_cmd 00:26:42.989 22:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.989 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 [2024-04-24 22:18:25.028742] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.989 null0 00:26:42.989 [2024-04-24 22:18:25.060756] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:42.989 [2024-04-24 22:18:25.060832] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:42.989 [2024-04-24 22:18:25.061351] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:42.989 [2024-04-24 22:18:25.068805] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:42.989 22:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.989 22:18:25 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:42.989 22:18:25 -- common/autotest_common.sh@638 -- # local es=0 00:26:42.989 22:18:25 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:42.989 22:18:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:42.989 22:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:42.989 22:18:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:42.989 22:18:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:42.989 22:18:25 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:42.989 22:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.989 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 [2024-04-24 22:18:25.080830] nvmf_rpc.c: 779:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:42.989 request: 00:26:42.989 { 00:26:42.989 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:42.989 "secure_channel": false, 00:26:42.989 "listen_address": { 00:26:42.989 "trtype": "tcp", 00:26:42.989 "traddr": "127.0.0.1", 00:26:42.989 "trsvcid": "4420" 00:26:42.989 }, 00:26:42.989 "method": "nvmf_subsystem_add_listener", 00:26:42.989 "req_id": 1 00:26:42.989 } 00:26:42.989 Got JSON-RPC error response 00:26:42.989 response: 00:26:42.989 { 00:26:42.989 "code": -32602, 00:26:42.989 "message": "Invalid parameters" 00:26:42.989 } 00:26:42.989 22:18:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:42.989 22:18:25 -- common/autotest_common.sh@641 -- # es=1 00:26:42.989 22:18:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:42.989 22:18:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:42.989 22:18:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:42.989 22:18:25 -- keyring/file.sh@46 -- # bperfpid=4066438 00:26:42.989 22:18:25 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 
-r /var/tmp/bperf.sock -z 00:26:42.989 22:18:25 -- keyring/file.sh@48 -- # waitforlisten 4066438 /var/tmp/bperf.sock 00:26:42.989 22:18:25 -- common/autotest_common.sh@817 -- # '[' -z 4066438 ']' 00:26:42.989 22:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.989 22:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:42.989 22:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.989 22:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:42.989 22:18:25 -- common/autotest_common.sh@10 -- # set +x 00:26:42.989 [2024-04-24 22:18:25.130365] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:26:42.989 [2024-04-24 22:18:25.130448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4066438 ] 00:26:42.989 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.989 [2024-04-24 22:18:25.196214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.247 [2024-04-24 22:18:25.315604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.247 22:18:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:43.247 22:18:25 -- common/autotest_common.sh@850 -- # return 0 00:26:43.247 22:18:25 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:43.247 22:18:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:43.506 22:18:25 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9KWfpadwsk 00:26:43.506 22:18:25 -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9KWfpadwsk 00:26:44.072 22:18:26 -- keyring/file.sh@51 -- # get_key key0 00:26:44.072 22:18:26 -- keyring/file.sh@51 -- # jq -r .path 00:26:44.072 22:18:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:44.072 22:18:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:44.072 22:18:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:44.330 22:18:26 -- keyring/file.sh@51 -- # [[ /tmp/tmp.HcJzYLX7Vb == \/\t\m\p\/\t\m\p\.\H\c\J\z\Y\L\X\7\V\b ]] 00:26:44.330 22:18:26 -- keyring/file.sh@52 -- # get_key key1 00:26:44.330 22:18:26 -- keyring/file.sh@52 -- # jq -r .path 00:26:44.330 22:18:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:44.330 22:18:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:44.330 22:18:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:44.587 22:18:26 -- keyring/file.sh@52 -- # [[ /tmp/tmp.9KWfpadwsk == \/\t\m\p\/\t\m\p\.\9\K\W\f\p\a\d\w\s\k ]] 00:26:44.587 22:18:26 -- keyring/file.sh@53 -- # get_refcnt key0 00:26:44.587 22:18:26 -- keyring/common.sh@12 -- # get_key key0 00:26:44.587 22:18:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:44.587 22:18:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:44.587 22:18:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:44.587 22:18:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:44.845 22:18:27 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:44.845 22:18:27 -- keyring/file.sh@54 -- # get_refcnt key1 00:26:44.845 22:18:27 -- keyring/common.sh@12 -- # get_key key1 00:26:44.845 
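The `get_refcnt` checks above chain `keyring_get_keys` through `jq '.[] | select(.name == "key0")'` and `jq -r .refcnt` to pull one key's reference count out of the RPC's JSON array. The equivalent selection in Python, against a hypothetical sample payload (the field names `name`/`refcnt` match the `jq` filters in the log; the sample values are illustrative):

```python
import json


def get_refcnt(keys_json: str, name: str) -> int:
    """Mirror of: keyring_get_keys | jq '.[] | select(.name == NAME)' | jq -r .refcnt"""
    for key in json.loads(keys_json):
        if key["name"] == name:
            return key["refcnt"]
    raise KeyError(name)


# Illustrative payload shaped like the keyring_get_keys output implied above.
sample = '[{"name": "key0", "path": "/tmp/tmp.XXXXXX", "refcnt": 1}]'
```

A refcount of 1 means only the keyring holds the key; the later `(( 2 == 2 ))` check shows it rising once an attached NVMe controller also references it.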
22:18:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:44.845 22:18:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:44.845 22:18:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:44.845 22:18:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:45.103 22:18:27 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:45.103 22:18:27 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:45.103 22:18:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:45.361 [2024-04-24 22:18:27.607069] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:45.689 nvme0n1 00:26:45.689 22:18:27 -- keyring/file.sh@59 -- # get_refcnt key0 00:26:45.689 22:18:27 -- keyring/common.sh@12 -- # get_key key0 00:26:45.689 22:18:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:45.689 22:18:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:45.689 22:18:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:45.689 22:18:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:45.948 22:18:27 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:45.948 22:18:27 -- keyring/file.sh@60 -- # get_refcnt key1 00:26:45.948 22:18:27 -- keyring/common.sh@12 -- # get_key key1 00:26:45.948 22:18:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:45.948 22:18:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:45.948 22:18:27 -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:26:45.948 22:18:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:46.207 22:18:28 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:46.207 22:18:28 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.207 Running I/O for 1 seconds... 00:26:47.588 00:26:47.588 Latency(us) 00:26:47.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.588 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:47.588 nvme0n1 : 1.02 5862.47 22.90 0.00 0.00 21590.62 4781.70 30098.01 00:26:47.588 =================================================================================================================== 00:26:47.588 Total : 5862.47 22.90 0.00 0.00 21590.62 4781.70 30098.01 00:26:47.588 0 00:26:47.588 22:18:29 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:47.588 22:18:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:47.588 22:18:29 -- keyring/file.sh@65 -- # get_refcnt key0 00:26:47.588 22:18:29 -- keyring/common.sh@12 -- # get_key key0 00:26:47.588 22:18:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:47.588 22:18:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:47.588 22:18:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:47.588 22:18:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:47.847 22:18:30 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:47.847 22:18:30 -- keyring/file.sh@66 -- # get_refcnt key1 00:26:47.847 22:18:30 -- keyring/common.sh@12 -- # get_key key1 00:26:47.847 22:18:30 -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:26:47.847 22:18:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:47.847 22:18:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:47.847 22:18:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:48.417 22:18:30 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:48.417 22:18:30 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:48.417 22:18:30 -- common/autotest_common.sh@638 -- # local es=0 00:26:48.417 22:18:30 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:48.417 22:18:30 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:26:48.417 22:18:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:48.417 22:18:30 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:26:48.417 22:18:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:48.417 22:18:30 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:48.417 22:18:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:48.677 [2024-04-24 22:18:30.807629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:48.677 [2024-04-24 22:18:30.807934] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c69d0 (107): Transport endpoint is not connected 00:26:48.677 [2024-04-24 22:18:30.808925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c69d0 (9): Bad file descriptor 00:26:48.677 [2024-04-24 22:18:30.809924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.677 [2024-04-24 22:18:30.809948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:48.677 [2024-04-24 22:18:30.809964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.677 request: 00:26:48.677 { 00:26:48.677 "name": "nvme0", 00:26:48.677 "trtype": "tcp", 00:26:48.677 "traddr": "127.0.0.1", 00:26:48.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:48.677 "adrfam": "ipv4", 00:26:48.677 "trsvcid": "4420", 00:26:48.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:48.677 "psk": "key1", 00:26:48.677 "method": "bdev_nvme_attach_controller", 00:26:48.677 "req_id": 1 00:26:48.677 } 00:26:48.677 Got JSON-RPC error response 00:26:48.677 response: 00:26:48.677 { 00:26:48.677 "code": -32602, 00:26:48.677 "message": "Invalid parameters" 00:26:48.677 } 00:26:48.677 22:18:30 -- common/autotest_common.sh@641 -- # es=1 00:26:48.677 22:18:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:48.677 22:18:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:48.677 22:18:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:48.677 22:18:30 -- keyring/file.sh@71 -- # get_refcnt key0 00:26:48.677 22:18:30 -- keyring/common.sh@12 -- # get_key key0 00:26:48.677 22:18:30 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:48.677 22:18:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:48.677 22:18:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:48.677 22:18:30 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:48.937 22:18:31 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:48.937 22:18:31 -- keyring/file.sh@72 -- # get_refcnt key1 00:26:48.937 22:18:31 -- keyring/common.sh@12 -- # get_key key1 00:26:48.937 22:18:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:48.937 22:18:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:48.937 22:18:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:48.937 22:18:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:49.504 22:18:31 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:49.504 22:18:31 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:49.504 22:18:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:49.504 22:18:31 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:49.504 22:18:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:50.071 22:18:32 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:50.071 22:18:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:50.071 22:18:32 -- keyring/file.sh@77 -- # jq length 00:26:50.330 22:18:32 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:50.330 22:18:32 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HcJzYLX7Vb 00:26:50.330 22:18:32 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:50.330 22:18:32 -- common/autotest_common.sh@638 -- # local es=0 00:26:50.330 22:18:32 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 
00:26:50.330 22:18:32 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:26:50.330 22:18:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.330 22:18:32 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:26:50.330 22:18:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.330 22:18:32 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:50.330 22:18:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:50.589 [2024-04-24 22:18:32.769957] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HcJzYLX7Vb': 0100660 00:26:50.589 [2024-04-24 22:18:32.769997] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:50.589 request: 00:26:50.589 { 00:26:50.589 "name": "key0", 00:26:50.589 "path": "/tmp/tmp.HcJzYLX7Vb", 00:26:50.589 "method": "keyring_file_add_key", 00:26:50.589 "req_id": 1 00:26:50.589 } 00:26:50.589 Got JSON-RPC error response 00:26:50.589 response: 00:26:50.589 { 00:26:50.589 "code": -1, 00:26:50.589 "message": "Operation not permitted" 00:26:50.589 } 00:26:50.589 22:18:32 -- common/autotest_common.sh@641 -- # es=1 00:26:50.589 22:18:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:50.589 22:18:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:50.589 22:18:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:50.589 22:18:32 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HcJzYLX7Vb 00:26:50.589 22:18:32 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:50.589 22:18:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HcJzYLX7Vb 00:26:51.156 22:18:33 -- keyring/file.sh@86 -- # rm -f 
/tmp/tmp.HcJzYLX7Vb 00:26:51.156 22:18:33 -- keyring/file.sh@88 -- # get_refcnt key0 00:26:51.156 22:18:33 -- keyring/common.sh@12 -- # get_key key0 00:26:51.156 22:18:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:51.156 22:18:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:51.156 22:18:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:51.156 22:18:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:51.415 22:18:33 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:51.415 22:18:33 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:51.415 22:18:33 -- common/autotest_common.sh@638 -- # local es=0 00:26:51.415 22:18:33 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:51.415 22:18:33 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:26:51.415 22:18:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:51.415 22:18:33 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:26:51.415 22:18:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:51.415 22:18:33 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:51.415 22:18:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:51.674 [2024-04-24 22:18:33.844796] keyring.c: 
29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HcJzYLX7Vb': No such file or directory 00:26:51.674 [2024-04-24 22:18:33.844834] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:51.674 [2024-04-24 22:18:33.844865] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:51.674 [2024-04-24 22:18:33.844879] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:51.674 [2024-04-24 22:18:33.844892] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:51.674 request: 00:26:51.674 { 00:26:51.674 "name": "nvme0", 00:26:51.674 "trtype": "tcp", 00:26:51.674 "traddr": "127.0.0.1", 00:26:51.674 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:51.674 "adrfam": "ipv4", 00:26:51.674 "trsvcid": "4420", 00:26:51.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:51.674 "psk": "key0", 00:26:51.674 "method": "bdev_nvme_attach_controller", 00:26:51.674 "req_id": 1 00:26:51.674 } 00:26:51.674 Got JSON-RPC error response 00:26:51.674 response: 00:26:51.674 { 00:26:51.674 "code": -19, 00:26:51.674 "message": "No such device" 00:26:51.674 } 00:26:51.674 22:18:33 -- common/autotest_common.sh@641 -- # es=1 00:26:51.674 22:18:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:51.674 22:18:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:51.674 22:18:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:51.674 22:18:33 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:51.674 22:18:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:51.933 22:18:34 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:51.933 22:18:34 -- keyring/common.sh@15 -- # local name key digest path 00:26:51.933 22:18:34 -- 
keyring/common.sh@17 -- # name=key0 00:26:51.933 22:18:34 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:51.933 22:18:34 -- keyring/common.sh@17 -- # digest=0 00:26:51.933 22:18:34 -- keyring/common.sh@18 -- # mktemp 00:26:51.933 22:18:34 -- keyring/common.sh@18 -- # path=/tmp/tmp.dlYGHRnKvo 00:26:51.934 22:18:34 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:51.934 22:18:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:51.934 22:18:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:51.934 22:18:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:51.934 22:18:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:26:51.934 22:18:34 -- nvmf/common.sh@693 -- # digest=0 00:26:51.934 22:18:34 -- nvmf/common.sh@694 -- # python - 00:26:52.193 22:18:34 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dlYGHRnKvo 00:26:52.193 22:18:34 -- keyring/common.sh@23 -- # echo /tmp/tmp.dlYGHRnKvo 00:26:52.193 22:18:34 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dlYGHRnKvo 00:26:52.193 22:18:34 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dlYGHRnKvo 00:26:52.193 22:18:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dlYGHRnKvo 00:26:52.452 22:18:34 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:52.452 22:18:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:52.712 nvme0n1 00:26:52.971 22:18:34 -- keyring/file.sh@99 -- # get_refcnt key0 00:26:52.971 22:18:34 -- 
keyring/common.sh@12 -- # get_key key0 00:26:52.971 22:18:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:52.971 22:18:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:52.971 22:18:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:52.971 22:18:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.230 22:18:35 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:53.230 22:18:35 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:53.230 22:18:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:53.489 22:18:35 -- keyring/file.sh@101 -- # get_key key0 00:26:53.489 22:18:35 -- keyring/file.sh@101 -- # jq -r .removed 00:26:53.489 22:18:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.489 22:18:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:53.489 22:18:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.748 22:18:35 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:26:53.748 22:18:35 -- keyring/file.sh@102 -- # get_refcnt key0 00:26:53.748 22:18:35 -- keyring/common.sh@12 -- # get_key key0 00:26:53.748 22:18:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:53.748 22:18:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:53.748 22:18:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:53.748 22:18:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:54.007 22:18:36 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:54.007 22:18:36 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:54.007 22:18:36 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:54.265 22:18:36 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:54.265 22:18:36 -- keyring/file.sh@104 -- # jq length 00:26:54.265 22:18:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:54.832 22:18:37 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:54.832 22:18:37 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dlYGHRnKvo 00:26:54.832 22:18:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dlYGHRnKvo 00:26:55.090 22:18:37 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9KWfpadwsk 00:26:55.090 22:18:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9KWfpadwsk 00:26:55.349 22:18:37 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.349 22:18:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:55.629 nvme0n1 00:26:55.629 22:18:37 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:55.629 22:18:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:56.209 22:18:38 -- keyring/file.sh@112 -- # config='{ 00:26:56.209 "subsystems": [ 00:26:56.209 { 00:26:56.209 "subsystem": "keyring", 00:26:56.209 "config": [ 00:26:56.209 { 00:26:56.209 "method": 
"keyring_file_add_key", 00:26:56.209 "params": { 00:26:56.209 "name": "key0", 00:26:56.209 "path": "/tmp/tmp.dlYGHRnKvo" 00:26:56.209 } 00:26:56.209 }, 00:26:56.209 { 00:26:56.209 "method": "keyring_file_add_key", 00:26:56.209 "params": { 00:26:56.209 "name": "key1", 00:26:56.209 "path": "/tmp/tmp.9KWfpadwsk" 00:26:56.209 } 00:26:56.209 } 00:26:56.209 ] 00:26:56.209 }, 00:26:56.209 { 00:26:56.209 "subsystem": "iobuf", 00:26:56.209 "config": [ 00:26:56.209 { 00:26:56.209 "method": "iobuf_set_options", 00:26:56.209 "params": { 00:26:56.209 "small_pool_count": 8192, 00:26:56.209 "large_pool_count": 1024, 00:26:56.209 "small_bufsize": 8192, 00:26:56.209 "large_bufsize": 135168 00:26:56.209 } 00:26:56.209 } 00:26:56.209 ] 00:26:56.209 }, 00:26:56.209 { 00:26:56.209 "subsystem": "sock", 00:26:56.209 "config": [ 00:26:56.209 { 00:26:56.209 "method": "sock_impl_set_options", 00:26:56.209 "params": { 00:26:56.209 "impl_name": "posix", 00:26:56.209 "recv_buf_size": 2097152, 00:26:56.209 "send_buf_size": 2097152, 00:26:56.209 "enable_recv_pipe": true, 00:26:56.209 "enable_quickack": false, 00:26:56.209 "enable_placement_id": 0, 00:26:56.209 "enable_zerocopy_send_server": true, 00:26:56.209 "enable_zerocopy_send_client": false, 00:26:56.209 "zerocopy_threshold": 0, 00:26:56.209 "tls_version": 0, 00:26:56.209 "enable_ktls": false 00:26:56.209 } 00:26:56.209 }, 00:26:56.209 { 00:26:56.209 "method": "sock_impl_set_options", 00:26:56.209 "params": { 00:26:56.209 "impl_name": "ssl", 00:26:56.209 "recv_buf_size": 4096, 00:26:56.209 "send_buf_size": 4096, 00:26:56.209 "enable_recv_pipe": true, 00:26:56.209 "enable_quickack": false, 00:26:56.209 "enable_placement_id": 0, 00:26:56.209 "enable_zerocopy_send_server": true, 00:26:56.209 "enable_zerocopy_send_client": false, 00:26:56.210 "zerocopy_threshold": 0, 00:26:56.210 "tls_version": 0, 00:26:56.210 "enable_ktls": false 00:26:56.210 } 00:26:56.210 } 00:26:56.210 ] 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "subsystem": "vmd", 
00:26:56.210 "config": [] 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "subsystem": "accel", 00:26:56.210 "config": [ 00:26:56.210 { 00:26:56.210 "method": "accel_set_options", 00:26:56.210 "params": { 00:26:56.210 "small_cache_size": 128, 00:26:56.210 "large_cache_size": 16, 00:26:56.210 "task_count": 2048, 00:26:56.210 "sequence_count": 2048, 00:26:56.210 "buf_count": 2048 00:26:56.210 } 00:26:56.210 } 00:26:56.210 ] 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "subsystem": "bdev", 00:26:56.210 "config": [ 00:26:56.210 { 00:26:56.210 "method": "bdev_set_options", 00:26:56.210 "params": { 00:26:56.210 "bdev_io_pool_size": 65535, 00:26:56.210 "bdev_io_cache_size": 256, 00:26:56.210 "bdev_auto_examine": true, 00:26:56.210 "iobuf_small_cache_size": 128, 00:26:56.210 "iobuf_large_cache_size": 16 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_raid_set_options", 00:26:56.210 "params": { 00:26:56.210 "process_window_size_kb": 1024 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_iscsi_set_options", 00:26:56.210 "params": { 00:26:56.210 "timeout_sec": 30 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_nvme_set_options", 00:26:56.210 "params": { 00:26:56.210 "action_on_timeout": "none", 00:26:56.210 "timeout_us": 0, 00:26:56.210 "timeout_admin_us": 0, 00:26:56.210 "keep_alive_timeout_ms": 10000, 00:26:56.210 "arbitration_burst": 0, 00:26:56.210 "low_priority_weight": 0, 00:26:56.210 "medium_priority_weight": 0, 00:26:56.210 "high_priority_weight": 0, 00:26:56.210 "nvme_adminq_poll_period_us": 10000, 00:26:56.210 "nvme_ioq_poll_period_us": 0, 00:26:56.210 "io_queue_requests": 512, 00:26:56.210 "delay_cmd_submit": true, 00:26:56.210 "transport_retry_count": 4, 00:26:56.210 "bdev_retry_count": 3, 00:26:56.210 "transport_ack_timeout": 0, 00:26:56.210 "ctrlr_loss_timeout_sec": 0, 00:26:56.210 "reconnect_delay_sec": 0, 00:26:56.210 "fast_io_fail_timeout_sec": 0, 00:26:56.210 "disable_auto_failback": 
false, 00:26:56.210 "generate_uuids": false, 00:26:56.210 "transport_tos": 0, 00:26:56.210 "nvme_error_stat": false, 00:26:56.210 "rdma_srq_size": 0, 00:26:56.210 "io_path_stat": false, 00:26:56.210 "allow_accel_sequence": false, 00:26:56.210 "rdma_max_cq_size": 0, 00:26:56.210 "rdma_cm_event_timeout_ms": 0, 00:26:56.210 "dhchap_digests": [ 00:26:56.210 "sha256", 00:26:56.210 "sha384", 00:26:56.210 "sha512" 00:26:56.210 ], 00:26:56.210 "dhchap_dhgroups": [ 00:26:56.210 "null", 00:26:56.210 "ffdhe2048", 00:26:56.210 "ffdhe3072", 00:26:56.210 "ffdhe4096", 00:26:56.210 "ffdhe6144", 00:26:56.210 "ffdhe8192" 00:26:56.210 ] 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_nvme_attach_controller", 00:26:56.210 "params": { 00:26:56.210 "name": "nvme0", 00:26:56.210 "trtype": "TCP", 00:26:56.210 "adrfam": "IPv4", 00:26:56.210 "traddr": "127.0.0.1", 00:26:56.210 "trsvcid": "4420", 00:26:56.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.210 "prchk_reftag": false, 00:26:56.210 "prchk_guard": false, 00:26:56.210 "ctrlr_loss_timeout_sec": 0, 00:26:56.210 "reconnect_delay_sec": 0, 00:26:56.210 "fast_io_fail_timeout_sec": 0, 00:26:56.210 "psk": "key0", 00:26:56.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:56.210 "hdgst": false, 00:26:56.210 "ddgst": false 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_nvme_set_hotplug", 00:26:56.210 "params": { 00:26:56.210 "period_us": 100000, 00:26:56.210 "enable": false 00:26:56.210 } 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "method": "bdev_wait_for_examine" 00:26:56.210 } 00:26:56.210 ] 00:26:56.210 }, 00:26:56.210 { 00:26:56.210 "subsystem": "nbd", 00:26:56.210 "config": [] 00:26:56.210 } 00:26:56.210 ] 00:26:56.210 }' 00:26:56.210 22:18:38 -- keyring/file.sh@114 -- # killprocess 4066438 00:26:56.210 22:18:38 -- common/autotest_common.sh@936 -- # '[' -z 4066438 ']' 00:26:56.210 22:18:38 -- common/autotest_common.sh@940 -- # kill -0 4066438 00:26:56.210 22:18:38 -- 
common/autotest_common.sh@941 -- # uname 00:26:56.210 22:18:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:56.210 22:18:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4066438 00:26:56.210 22:18:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:56.210 22:18:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:56.210 22:18:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4066438' 00:26:56.210 killing process with pid 4066438 00:26:56.210 22:18:38 -- common/autotest_common.sh@955 -- # kill 4066438 00:26:56.210 Received shutdown signal, test time was about 1.000000 seconds 00:26:56.210 00:26:56.210 Latency(us) 00:26:56.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.210 =================================================================================================================== 00:26:56.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.210 22:18:38 -- common/autotest_common.sh@960 -- # wait 4066438 00:26:56.469 22:18:38 -- keyring/file.sh@117 -- # bperfpid=4068169 00:26:56.469 22:18:38 -- keyring/file.sh@119 -- # waitforlisten 4068169 /var/tmp/bperf.sock 00:26:56.469 22:18:38 -- common/autotest_common.sh@817 -- # '[' -z 4068169 ']' 00:26:56.469 22:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.469 22:18:38 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:56.469 22:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:56.469 22:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:56.469 22:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:56.469 22:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:56.469 22:18:38 -- keyring/file.sh@115 -- # echo '{ 00:26:56.469 "subsystems": [ 00:26:56.469 { 00:26:56.469 "subsystem": "keyring", 00:26:56.469 "config": [ 00:26:56.469 { 00:26:56.469 "method": "keyring_file_add_key", 00:26:56.469 "params": { 00:26:56.469 "name": "key0", 00:26:56.469 "path": "/tmp/tmp.dlYGHRnKvo" 00:26:56.469 } 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "method": "keyring_file_add_key", 00:26:56.469 "params": { 00:26:56.469 "name": "key1", 00:26:56.469 "path": "/tmp/tmp.9KWfpadwsk" 00:26:56.469 } 00:26:56.469 } 00:26:56.469 ] 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "subsystem": "iobuf", 00:26:56.469 "config": [ 00:26:56.469 { 00:26:56.469 "method": "iobuf_set_options", 00:26:56.469 "params": { 00:26:56.469 "small_pool_count": 8192, 00:26:56.469 "large_pool_count": 1024, 00:26:56.469 "small_bufsize": 8192, 00:26:56.469 "large_bufsize": 135168 00:26:56.469 } 00:26:56.469 } 00:26:56.469 ] 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "subsystem": "sock", 00:26:56.469 "config": [ 00:26:56.469 { 00:26:56.469 "method": "sock_impl_set_options", 00:26:56.469 "params": { 00:26:56.469 "impl_name": "posix", 00:26:56.469 "recv_buf_size": 2097152, 00:26:56.469 "send_buf_size": 2097152, 00:26:56.469 "enable_recv_pipe": true, 00:26:56.469 "enable_quickack": false, 00:26:56.469 "enable_placement_id": 0, 00:26:56.469 "enable_zerocopy_send_server": true, 00:26:56.469 "enable_zerocopy_send_client": false, 00:26:56.469 "zerocopy_threshold": 0, 00:26:56.469 "tls_version": 0, 00:26:56.469 "enable_ktls": false 00:26:56.469 } 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "method": "sock_impl_set_options", 00:26:56.469 "params": { 00:26:56.469 "impl_name": "ssl", 00:26:56.469 "recv_buf_size": 4096, 00:26:56.469 "send_buf_size": 4096, 00:26:56.469 "enable_recv_pipe": true, 00:26:56.469 "enable_quickack": false, 00:26:56.469 
"enable_placement_id": 0, 00:26:56.469 "enable_zerocopy_send_server": true, 00:26:56.469 "enable_zerocopy_send_client": false, 00:26:56.469 "zerocopy_threshold": 0, 00:26:56.469 "tls_version": 0, 00:26:56.469 "enable_ktls": false 00:26:56.469 } 00:26:56.469 } 00:26:56.469 ] 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "subsystem": "vmd", 00:26:56.469 "config": [] 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "subsystem": "accel", 00:26:56.469 "config": [ 00:26:56.469 { 00:26:56.469 "method": "accel_set_options", 00:26:56.469 "params": { 00:26:56.469 "small_cache_size": 128, 00:26:56.469 "large_cache_size": 16, 00:26:56.469 "task_count": 2048, 00:26:56.469 "sequence_count": 2048, 00:26:56.469 "buf_count": 2048 00:26:56.469 } 00:26:56.469 } 00:26:56.469 ] 00:26:56.469 }, 00:26:56.469 { 00:26:56.469 "subsystem": "bdev", 00:26:56.469 "config": [ 00:26:56.469 { 00:26:56.470 "method": "bdev_set_options", 00:26:56.470 "params": { 00:26:56.470 "bdev_io_pool_size": 65535, 00:26:56.470 "bdev_io_cache_size": 256, 00:26:56.470 "bdev_auto_examine": true, 00:26:56.470 "iobuf_small_cache_size": 128, 00:26:56.470 "iobuf_large_cache_size": 16 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_raid_set_options", 00:26:56.470 "params": { 00:26:56.470 "process_window_size_kb": 1024 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_iscsi_set_options", 00:26:56.470 "params": { 00:26:56.470 "timeout_sec": 30 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_nvme_set_options", 00:26:56.470 "params": { 00:26:56.470 "action_on_timeout": "none", 00:26:56.470 "timeout_us": 0, 00:26:56.470 "timeout_admin_us": 0, 00:26:56.470 "keep_alive_timeout_ms": 10000, 00:26:56.470 "arbitration_burst": 0, 00:26:56.470 "low_priority_weight": 0, 00:26:56.470 "medium_priority_weight": 0, 00:26:56.470 "high_priority_weight": 0, 00:26:56.470 "nvme_adminq_poll_period_us": 10000, 00:26:56.470 "nvme_ioq_poll_period_us": 0, 00:26:56.470 
"io_queue_requests": 512, 00:26:56.470 "delay_cmd_submit": true, 00:26:56.470 "transport_retry_count": 4, 00:26:56.470 "bdev_retry_count": 3, 00:26:56.470 "transport_ack_timeout": 0, 00:26:56.470 "ctrlr_loss_timeout_sec": 0, 00:26:56.470 "reconnect_delay_sec": 0, 00:26:56.470 "fast_io_fail_timeout_sec": 0, 00:26:56.470 "disable_auto_failback": false, 00:26:56.470 "generate_uuids": false, 00:26:56.470 "transport_tos": 0, 00:26:56.470 "nvme_error_stat": false, 00:26:56.470 "rdma_srq_size": 0, 00:26:56.470 "io_path_stat": false, 00:26:56.470 "allow_accel_sequence": false, 00:26:56.470 "rdma_max_cq_size": 0, 00:26:56.470 "rdma_cm_event_timeout_ms": 0, 00:26:56.470 "dhchap_digests": [ 00:26:56.470 "sha256", 00:26:56.470 "sha384", 00:26:56.470 "sha512" 00:26:56.470 ], 00:26:56.470 "dhchap_dhgroups": [ 00:26:56.470 "null", 00:26:56.470 "ffdhe2048", 00:26:56.470 "ffdhe3072", 00:26:56.470 "ffdhe4096", 00:26:56.470 "ffdhe6144", 00:26:56.470 "ffdhe8192" 00:26:56.470 ] 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_nvme_attach_controller", 00:26:56.470 "params": { 00:26:56.470 "name": "nvme0", 00:26:56.470 "trtype": "TCP", 00:26:56.470 "adrfam": "IPv4", 00:26:56.470 "traddr": "127.0.0.1", 00:26:56.470 "trsvcid": "4420", 00:26:56.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:56.470 "prchk_reftag": false, 00:26:56.470 "prchk_guard": false, 00:26:56.470 "ctrlr_loss_timeout_sec": 0, 00:26:56.470 "reconnect_delay_sec": 0, 00:26:56.470 "fast_io_fail_timeout_sec": 0, 00:26:56.470 "psk": "key0", 00:26:56.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:56.470 "hdgst": false, 00:26:56.470 "ddgst": false 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_nvme_set_hotplug", 00:26:56.470 "params": { 00:26:56.470 "period_us": 100000, 00:26:56.470 "enable": false 00:26:56.470 } 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 "method": "bdev_wait_for_examine" 00:26:56.470 } 00:26:56.470 ] 00:26:56.470 }, 00:26:56.470 { 00:26:56.470 
"subsystem": "nbd", 00:26:56.470 "config": [] 00:26:56.470 } 00:26:56.470 ] 00:26:56.470 }' 00:26:56.470 [2024-04-24 22:18:38.584689] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:26:56.470 [2024-04-24 22:18:38.584775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068169 ] 00:26:56.470 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.470 [2024-04-24 22:18:38.652258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.730 [2024-04-24 22:18:38.770342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.730 [2024-04-24 22:18:38.957466] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:57.665 22:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:57.665 22:18:39 -- common/autotest_common.sh@850 -- # return 0 00:26:57.665 22:18:39 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:26:57.665 22:18:39 -- keyring/file.sh@120 -- # jq length 00:26:57.665 22:18:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.923 22:18:40 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:26:57.923 22:18:40 -- keyring/file.sh@121 -- # get_refcnt key0 00:26:57.923 22:18:40 -- keyring/common.sh@12 -- # get_key key0 00:26:57.923 22:18:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:57.923 22:18:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:57.923 22:18:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:57.923 22:18:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:58.490 22:18:40 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:58.490 22:18:40 
-- keyring/file.sh@122 -- # get_refcnt key1 00:26:58.490 22:18:40 -- keyring/common.sh@12 -- # get_key key1 00:26:58.490 22:18:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:58.490 22:18:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.490 22:18:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.490 22:18:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:58.748 22:18:40 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:26:58.748 22:18:40 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:26:58.748 22:18:40 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:26:58.748 22:18:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:59.316 22:18:41 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:26:59.316 22:18:41 -- keyring/file.sh@1 -- # cleanup 00:26:59.316 22:18:41 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dlYGHRnKvo /tmp/tmp.9KWfpadwsk 00:26:59.316 22:18:41 -- keyring/file.sh@20 -- # killprocess 4068169 00:26:59.316 22:18:41 -- common/autotest_common.sh@936 -- # '[' -z 4068169 ']' 00:26:59.316 22:18:41 -- common/autotest_common.sh@940 -- # kill -0 4068169 00:26:59.316 22:18:41 -- common/autotest_common.sh@941 -- # uname 00:26:59.316 22:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.316 22:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4068169 00:26:59.316 22:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:59.316 22:18:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:59.316 22:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4068169' 00:26:59.316 killing process with pid 4068169 00:26:59.316 22:18:41 -- common/autotest_common.sh@955 -- # kill 4068169 00:26:59.316 Received shutdown signal, test time was 
about 1.000000 seconds 00:26:59.316 00:26:59.316 Latency(us) 00:26:59.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.316 =================================================================================================================== 00:26:59.316 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:59.316 22:18:41 -- common/autotest_common.sh@960 -- # wait 4068169 00:26:59.575 22:18:41 -- keyring/file.sh@21 -- # killprocess 4066428 00:26:59.575 22:18:41 -- common/autotest_common.sh@936 -- # '[' -z 4066428 ']' 00:26:59.575 22:18:41 -- common/autotest_common.sh@940 -- # kill -0 4066428 00:26:59.575 22:18:41 -- common/autotest_common.sh@941 -- # uname 00:26:59.575 22:18:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.575 22:18:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4066428 00:26:59.575 22:18:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:59.575 22:18:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:59.575 22:18:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4066428' 00:26:59.575 killing process with pid 4066428 00:26:59.575 22:18:41 -- common/autotest_common.sh@955 -- # kill 4066428 00:26:59.575 [2024-04-24 22:18:41.628988] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:59.575 [2024-04-24 22:18:41.629049] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:59.575 22:18:41 -- common/autotest_common.sh@960 -- # wait 4066428 00:27:00.142 00:27:00.142 real 0m17.805s 00:27:00.142 user 0m45.316s 00:27:00.142 sys 0m3.848s 00:27:00.142 22:18:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:00.142 22:18:42 -- common/autotest_common.sh@10 -- # set +x 00:27:00.142 
************************************ 00:27:00.142 END TEST keyring_file 00:27:00.142 ************************************ 00:27:00.142 22:18:42 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:27:00.142 22:18:42 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:27:00.142 22:18:42 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:00.142 22:18:42 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:00.142 22:18:42 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:27:00.142 22:18:42 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:27:00.142 22:18:42 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:27:00.142 22:18:42 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:27:00.142 22:18:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:00.142 22:18:42 -- common/autotest_common.sh@10 -- # set +x 00:27:00.142 22:18:42 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:27:00.142 22:18:42 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:27:00.142 22:18:42 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:27:00.142 22:18:42 -- common/autotest_common.sh@10 -- # set +x 00:27:02.051 INFO: APP EXITING 00:27:02.051 INFO: killing all VMs 00:27:02.051 INFO: killing vhost app 00:27:02.051 INFO: EXIT DONE 00:27:03.429 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:27:03.429 0000:00:04.7 
(8086 0e27): Already using the ioatdma driver 00:27:03.429 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:03.429 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:03.429 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:03.429 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:03.429 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:03.429 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:03.429 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:03.429 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:03.429 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:03.429 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:03.429 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:03.686 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:03.686 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:03.686 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:03.686 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:05.061 Cleaning 00:27:05.061 Removing: /var/run/dpdk/spdk0/config 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:05.061 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:05.061 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:05.061 Removing: /var/run/dpdk/spdk1/config 00:27:05.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:05.061 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:27:05.061  Removing:        /var/run/dpdk/spdk1/fbarray_memzone
00:27:05.061  Removing:        /var/run/dpdk/spdk1/hugepage_info
00:27:05.061  Removing:        /var/run/dpdk/spdk1/mp_socket
00:27:05.061  Removing:        /var/run/dpdk/spdk2/config
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:27:05.061  Removing:        /var/run/dpdk/spdk2/fbarray_memzone
00:27:05.061  Removing:        /var/run/dpdk/spdk2/hugepage_info
00:27:05.061  Removing:        /var/run/dpdk/spdk3/config
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:27:05.061  Removing:        /var/run/dpdk/spdk3/fbarray_memzone
00:27:05.061  Removing:        /var/run/dpdk/spdk3/hugepage_info
00:27:05.061  Removing:        /var/run/dpdk/spdk4/config
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:27:05.061  Removing:        /var/run/dpdk/spdk4/fbarray_memzone
00:27:05.061  Removing:        /var/run/dpdk/spdk4/hugepage_info
00:27:05.061  Removing:        /dev/shm/bdev_svc_trace.1
00:27:05.061  Removing:        /dev/shm/nvmf_trace.0
00:27:05.061  Removing:        /dev/shm/spdk_tgt_trace.pid3832691
00:27:05.061  Removing:        /var/run/dpdk/spdk0
00:27:05.061  Removing:        /var/run/dpdk/spdk1
00:27:05.061  Removing:        /var/run/dpdk/spdk2
00:27:05.061  Removing:        /var/run/dpdk/spdk3
00:27:05.061  Removing:        /var/run/dpdk/spdk4
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3830882
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3831732
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3832691
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3833302
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3833999
00:27:05.061  Removing:        /var/run/dpdk/spdk_pid3834141
00:27:05.062  Removing:        /var/run/dpdk/spdk_pid3834871
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3834892
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3835256
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3836714
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3837762
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3838086
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3838414
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3838630
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3838958
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3839125
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3839285
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3839591
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3840192
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3843084
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3843262
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3843428
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3843561
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3844010
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3844133
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3844576
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3844579
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3844880
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3845008
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3845186
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3845197
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3845709
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3845990
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3846193
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3846503
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3846541
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3846752
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3847032
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3847191
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3847413
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3847639
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3847813
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3848094
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3848251
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3848540
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3848703
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3848872
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3849151
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3849310
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3849597
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3849760
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3850036
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3850212
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3850385
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3850661
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3850827
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3851110
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3851246
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3851532
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3853746
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3880738
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3883534
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3889362
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3892680
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3895151
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3895634
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3903758
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3903764
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3904301
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3904951
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3905559
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3905988
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3906006
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3906148
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3906284
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3906288
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3906947
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3907594
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3908131
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3908526
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3908598
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3908790
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3909826
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3910665
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3916192
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3916463
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3919263
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3923109
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3926054
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3932744
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3938114
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3939310
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3939991
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3950505
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3952749
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3955816
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3956995
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3958310
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3958453
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3958594
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3958731
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3959292
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3961240
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3962102
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3962657
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3964275
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3964705
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3965273
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3967938
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3974005
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3976659
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3980560
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3981635
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3982743
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3985384
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3987772
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3992086
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3992163
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3995249
00:27:05.320  Removing:        /var/run/dpdk/spdk_pid3995385
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid3995524
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid3996268
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid3996303
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid3999062
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid3999402
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4002094
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4004068
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4007763
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4011092
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4015468
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4015477
00:27:05.321  Removing:        /var/run/dpdk/spdk_pid4028208
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4028629
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4029160
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4029695
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4030393
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4030925
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4031850
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4032380
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4035026
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4035168
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4039110
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4039190
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4040893
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4045960
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4045965
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4048908
00:27:05.579  Removing:        /var/run/dpdk/spdk_pid4050314
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4051719
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4052590
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4054062
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4054884
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4060389
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4060712
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4061152
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4062832
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4063579
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4063966
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4066428
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4066438
00:27:05.580  Removing:        /var/run/dpdk/spdk_pid4068169
00:27:05.580  Clean
00:27:05.580  22:18:47 -- common/autotest_common.sh@1437 -- # return 0
00:27:05.580  22:18:47 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:27:05.580  22:18:47 -- common/autotest_common.sh@716 -- # xtrace_disable
00:27:05.580  22:18:47 -- common/autotest_common.sh@10 -- # set +x
00:27:05.580  22:18:47 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:27:05.580  22:18:47 -- common/autotest_common.sh@716 -- # xtrace_disable
00:27:05.580  22:18:47 -- common/autotest_common.sh@10 -- # set +x
00:27:05.838  22:18:47 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:27:05.838  22:18:47 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:27:05.838  22:18:47 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:27:05.838  22:18:47 -- spdk/autotest.sh@389 -- # hash lcov
00:27:05.838  22:18:47 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:27:05.838  22:18:47 -- spdk/autotest.sh@391 -- # hostname
00:27:05.838  22:18:47 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:27:05.838  geninfo: WARNING: invalid characters removed from testname!
00:27:44.540  22:19:22 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:47.068  22:19:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:51.285  22:19:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:55.469  22:19:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:00.739  22:19:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:04.922  22:19:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:09.103  22:19:51 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:09.103  22:19:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:09.103  22:19:51 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:09.103  22:19:51 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:09.103  22:19:51 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:09.103  22:19:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:09.103  22:19:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:09.103  22:19:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:09.103  22:19:51 -- paths/export.sh@5 -- $ export PATH
00:28:09.104  22:19:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:09.104  22:19:51 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:09.104  22:19:51 -- common/autobuild_common.sh@435 -- $ date +%s
00:28:09.104  22:19:51 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713989991.XXXXXX
00:28:09.104  22:19:51 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713989991.OS4o5h
00:28:09.104  22:19:51 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:28:09.104  22:19:51 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:28:09.104  22:19:51 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:09.104  22:19:51 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:09.104  22:19:51 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:09.104  22:19:51 -- common/autobuild_common.sh@451 -- $ get_config_params
00:28:09.104  22:19:51 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:28:09.104  22:19:51 -- common/autotest_common.sh@10 -- $ set +x
00:28:09.104  22:19:51 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:09.104  22:19:51 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:28:09.104  22:19:51 -- pm/common@17 -- $ local monitor
00:28:09.104  22:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.104  22:19:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4077273
00:28:09.104  22:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.104  22:19:51 -- pm/common@21 -- $ date +%s
00:28:09.104  22:19:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4077275
00:28:09.104  22:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.104  22:19:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4077278
00:28:09.104  22:19:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:09.104  22:19:51 -- pm/common@21 -- $ date +%s
00:28:09.104  22:19:51 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4077280
00:28:09.104  22:19:51 -- pm/common@26 -- $ sleep 1
00:28:09.104  22:19:51 -- pm/common@21 -- $ date +%s
00:28:09.104  22:19:51 -- pm/common@21 -- $ date +%s
00:28:09.104  22:19:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713989991
00:28:09.104  22:19:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713989991
00:28:09.104  22:19:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713989991
00:28:09.104  22:19:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713989991
00:28:09.104  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713989991_collect-cpu-load.pm.log
00:28:09.104  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713989991_collect-vmstat.pm.log
00:28:09.104  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713989991_collect-cpu-temp.pm.log
00:28:09.104  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713989991_collect-bmc-pm.bmc.pm.log
00:28:10.038  22:19:52 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:28:10.038  22:19:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:28:10.038  22:19:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.038  22:19:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:10.038  22:19:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:10.038  22:19:52 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:10.038  22:19:52 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:10.038  22:19:52 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:10.038  22:19:52 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:10.038  22:19:52 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:10.038  22:19:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:10.038  22:19:52 -- pm/common@30 -- $ signal_monitor_resources TERM
00:28:10.038  22:19:52 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:28:10.038  22:19:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:10.038  22:19:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:10.038  22:19:52 -- pm/common@45 -- $ pid=4077298
00:28:10.038  22:19:52 -- pm/common@52 -- $ sudo kill -TERM 4077298
00:28:10.038  22:19:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:10.038  22:19:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:10.038  22:19:52 -- pm/common@45 -- $ pid=4077300
00:28:10.038  22:19:52 -- pm/common@52 -- $ sudo kill -TERM 4077300
00:28:10.296  22:19:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:10.296  22:19:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:10.296  22:19:52 -- pm/common@45 -- $ pid=4077299
00:28:10.296  22:19:52 -- pm/common@52 -- $ sudo kill -TERM 4077299
00:28:10.296  22:19:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:10.296  22:19:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:10.296  22:19:52 -- pm/common@45 -- $ pid=4077303
00:28:10.296  22:19:52 -- pm/common@52 -- $ sudo kill -TERM 4077303
00:28:10.296  + [[ -n 3739668 ]]
00:28:10.296  + sudo kill 3739668
00:28:10.305  [Pipeline] }
00:28:10.322  [Pipeline] // stage
00:28:10.327  [Pipeline] }
00:28:10.345  [Pipeline] // timeout
00:28:10.350  [Pipeline] }
00:28:10.367  [Pipeline] // catchError
00:28:10.371  [Pipeline] }
00:28:10.388  [Pipeline] // wrap
00:28:10.393  [Pipeline] }
00:28:10.405  [Pipeline] // catchError
00:28:10.412  [Pipeline] stage
00:28:10.414  [Pipeline] { (Epilogue)
00:28:10.424  [Pipeline] catchError
00:28:10.425  [Pipeline] {
00:28:10.438  [Pipeline] echo
00:28:10.440  Cleanup processes
00:28:10.445  [Pipeline] sh
00:28:10.722  + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.722  4077418 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:10.722  4077562 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:10.730  [Pipeline] sh
00:28:11.002  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:11.002  ++ grep -v 'sudo pgrep'
00:28:11.003  ++ awk '{print $1}'
00:28:11.003  + sudo kill -9 4077418
00:28:11.037  [Pipeline] sh
00:28:11.341  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:29.423  [Pipeline] sh
00:28:29.706  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:29.706  Artifacts sizes are good
00:28:29.721  [Pipeline] archiveArtifacts
00:28:29.728  Archiving artifacts
00:28:29.915  [Pipeline] sh
00:28:30.197  + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:30.775  [Pipeline] cleanWs
00:28:30.783  [WS-CLEANUP] Deleting project workspace...
00:28:30.783  [WS-CLEANUP] Deferred wipeout is used...
00:28:30.788  [WS-CLEANUP] done
00:28:30.789  [Pipeline] }
00:28:30.802  [Pipeline] // catchError
00:28:30.809  [Pipeline] sh
00:28:31.084  + logger -p user.info -t JENKINS-CI
00:28:31.093  [Pipeline] }
00:28:31.110  [Pipeline] // stage
00:28:31.116  [Pipeline] }
00:28:31.134  [Pipeline] // node
00:28:31.140  [Pipeline] End of Pipeline
00:28:31.180  Finished: SUCCESS